Oct 9 07:19:32.980784 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024
Oct 9 07:19:32.980820 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:19:32.980834 kernel: BIOS-provided physical RAM map:
Oct 9 07:19:32.980841 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:19:32.980847 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:19:32.980853 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:19:32.980861 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Oct 9 07:19:32.980867 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Oct 9 07:19:32.980874 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:19:32.980884 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:19:32.980892 kernel: NX (Execute Disable) protection: active
Oct 9 07:19:32.980898 kernel: APIC: Static calls initialized
Oct 9 07:19:32.980905 kernel: SMBIOS 2.8 present.
Oct 9 07:19:32.980912 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 9 07:19:32.980920 kernel: Hypervisor detected: KVM
Oct 9 07:19:32.980931 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:19:32.980939 kernel: kvm-clock: using sched offset of 3563102589 cycles
Oct 9 07:19:32.980948 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:19:32.980956 kernel: tsc: Detected 2494.138 MHz processor
Oct 9 07:19:32.980963 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:19:32.980971 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:19:32.980979 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Oct 9 07:19:32.980986 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:19:32.980994 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:19:32.981005 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:19:32.981012 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Oct 9 07:19:32.981020 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:19:32.981028 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:19:32.981035 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:19:32.981043 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 07:19:32.981050 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:19:32.981081 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:19:32.981089 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:19:32.981101 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:19:32.981108 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 9 07:19:32.981115 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 9 07:19:32.981124 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 07:19:32.981135 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 9 07:19:32.981145 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 9 07:19:32.981155 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 9 07:19:32.981172 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 9 07:19:32.981187 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 9 07:19:32.981200 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 9 07:19:32.981211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 9 07:19:32.981222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 9 07:19:32.981233 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Oct 9 07:19:32.981245 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Oct 9 07:19:32.981262 kernel: Zone ranges:
Oct 9 07:19:32.981274 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:19:32.981286 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Oct 9 07:19:32.981298 kernel: Normal empty
Oct 9 07:19:32.981309 kernel: Movable zone start for each node
Oct 9 07:19:32.981321 kernel: Early memory node ranges
Oct 9 07:19:32.981333 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:19:32.981345 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Oct 9 07:19:32.981355 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Oct 9 07:19:32.981368 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:19:32.981376 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:19:32.981385 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Oct 9 07:19:32.981393 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:19:32.981401 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:19:32.981410 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:19:32.981418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:19:32.981426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:19:32.981434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:19:32.981446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:19:32.981454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:19:32.981462 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:19:32.981470 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:19:32.981478 kernel: TSC deadline timer available
Oct 9 07:19:32.981486 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 07:19:32.981494 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:19:32.981502 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 07:19:32.981510 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:19:32.981522 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:19:32.981530 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 07:19:32.981538 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 07:19:32.981546 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 07:19:32.981554 kernel: pcpu-alloc: [0] 0 1
Oct 9 07:19:32.981561 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 07:19:32.981572 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:19:32.981580 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:19:32.981591 kernel: random: crng init done
Oct 9 07:19:32.981599 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:19:32.981608 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 07:19:32.981616 kernel: Fallback order for Node 0: 0
Oct 9 07:19:32.981624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Oct 9 07:19:32.981631 kernel: Policy zone: DMA32
Oct 9 07:19:32.981640 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:19:32.981648 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 131292K reserved, 0K cma-reserved)
Oct 9 07:19:32.981656 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 07:19:32.981668 kernel: Kernel/User page tables isolation: enabled
Oct 9 07:19:32.981676 kernel: ftrace: allocating 37706 entries in 148 pages
Oct 9 07:19:32.981684 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:19:32.981691 kernel: Dynamic Preempt: voluntary
Oct 9 07:19:32.981699 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:19:32.981709 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:19:32.981717 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 07:19:32.981725 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:19:32.981733 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:19:32.981745 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:19:32.981753 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:19:32.981761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 07:19:32.981769 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 07:19:32.981777 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:19:32.981785 kernel: Console: colour VGA+ 80x25
Oct 9 07:19:32.981794 kernel: printk: console [tty0] enabled
Oct 9 07:19:32.981801 kernel: printk: console [ttyS0] enabled
Oct 9 07:19:32.981810 kernel: ACPI: Core revision 20230628
Oct 9 07:19:32.981818 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:19:32.981829 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:19:32.981837 kernel: x2apic enabled
Oct 9 07:19:32.981845 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:19:32.981853 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:19:32.981862 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 07:19:32.981870 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Oct 9 07:19:32.981878 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 07:19:32.981886 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 07:19:32.981908 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:19:32.981916 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:19:32.981925 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:19:32.981936 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:19:32.981945 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 9 07:19:32.981953 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:19:32.981962 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:19:32.981970 kernel: MDS: Mitigation: Clear CPU buffers
Oct 9 07:19:32.981979 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 9 07:19:32.981991 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:19:32.982000 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:19:32.982008 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:19:32.982017 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:19:32.982026 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 9 07:19:32.982034 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:19:32.982043 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:19:32.982051 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 9 07:19:32.985169 kernel: SELinux: Initializing.
Oct 9 07:19:32.985183 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:19:32.985192 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:19:32.985202 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 9 07:19:32.985211 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:19:32.985219 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:19:32.985228 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:19:32.985238 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 9 07:19:32.985257 kernel: signal: max sigframe size: 1776
Oct 9 07:19:32.985266 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:19:32.985276 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:19:32.985285 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 9 07:19:32.985293 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:19:32.985302 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:19:32.985311 kernel: .... node #0, CPUs: #1
Oct 9 07:19:32.985320 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 07:19:32.985329 kernel: smpboot: Max logical packages: 1
Oct 9 07:19:32.985338 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Oct 9 07:19:32.985351 kernel: devtmpfs: initialized
Oct 9 07:19:32.985360 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:19:32.985369 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:19:32.985378 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 07:19:32.985387 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:19:32.985396 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:19:32.985404 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:19:32.985413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:19:32.985421 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:19:32.985434 kernel: cpuidle: using governor menu
Oct 9 07:19:32.985442 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:19:32.985451 kernel: audit: type=2000 audit(1728458372.085:1): state=initialized audit_enabled=0 res=1
Oct 9 07:19:32.985460 kernel: dca service started, version 1.12.1
Oct 9 07:19:32.985468 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:19:32.985477 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:19:32.985486 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:19:32.985495 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:19:32.985503 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:19:32.985515 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:19:32.985524 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:19:32.985533 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:19:32.985541 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:19:32.985550 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:19:32.985558 kernel: ACPI: Interpreter enabled
Oct 9 07:19:32.985567 kernel: ACPI: PM: (supports S0 S5)
Oct 9 07:19:32.985576 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:19:32.985585 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:19:32.985597 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:19:32.985605 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 07:19:32.985614 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:19:32.985894 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:19:32.986000 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 07:19:32.986109 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 07:19:32.986121 kernel: acpiphp: Slot [3] registered
Oct 9 07:19:32.986135 kernel: acpiphp: Slot [4] registered
Oct 9 07:19:32.986144 kernel: acpiphp: Slot [5] registered
Oct 9 07:19:32.986153 kernel: acpiphp: Slot [6] registered
Oct 9 07:19:32.986162 kernel: acpiphp: Slot [7] registered
Oct 9 07:19:32.986170 kernel: acpiphp: Slot [8] registered
Oct 9 07:19:32.986179 kernel: acpiphp: Slot [9] registered
Oct 9 07:19:32.986187 kernel: acpiphp: Slot [10] registered
Oct 9 07:19:32.986196 kernel: acpiphp: Slot [11] registered
Oct 9 07:19:32.986205 kernel: acpiphp: Slot [12] registered
Oct 9 07:19:32.986214 kernel: acpiphp: Slot [13] registered
Oct 9 07:19:32.986226 kernel: acpiphp: Slot [14] registered
Oct 9 07:19:32.986235 kernel: acpiphp: Slot [15] registered
Oct 9 07:19:32.986244 kernel: acpiphp: Slot [16] registered
Oct 9 07:19:32.986252 kernel: acpiphp: Slot [17] registered
Oct 9 07:19:32.986261 kernel: acpiphp: Slot [18] registered
Oct 9 07:19:32.986269 kernel: acpiphp: Slot [19] registered
Oct 9 07:19:32.986278 kernel: acpiphp: Slot [20] registered
Oct 9 07:19:32.986287 kernel: acpiphp: Slot [21] registered
Oct 9 07:19:32.986295 kernel: acpiphp: Slot [22] registered
Oct 9 07:19:32.986308 kernel: acpiphp: Slot [23] registered
Oct 9 07:19:32.986321 kernel: acpiphp: Slot [24] registered
Oct 9 07:19:32.986332 kernel: acpiphp: Slot [25] registered
Oct 9 07:19:32.986345 kernel: acpiphp: Slot [26] registered
Oct 9 07:19:32.986357 kernel: acpiphp: Slot [27] registered
Oct 9 07:19:32.986369 kernel: acpiphp: Slot [28] registered
Oct 9 07:19:32.986381 kernel: acpiphp: Slot [29] registered
Oct 9 07:19:32.986393 kernel: acpiphp: Slot [30] registered
Oct 9 07:19:32.986405 kernel: acpiphp: Slot [31] registered
Oct 9 07:19:32.986418 kernel: PCI host bridge to bus 0000:00
Oct 9 07:19:32.986630 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:19:32.986764 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:19:32.986857 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:19:32.986941 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 07:19:32.987021 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 07:19:32.989579 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:19:32.989753 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 07:19:32.989907 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 07:19:32.990019 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 07:19:32.990903 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Oct 9 07:19:32.991017 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 07:19:32.991127 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 07:19:32.991219 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 07:19:32.991341 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 07:19:32.991469 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 9 07:19:32.991562 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Oct 9 07:19:32.991665 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 07:19:32.991792 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 07:19:32.994169 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 07:19:32.994335 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 07:19:32.994436 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 07:19:32.994531 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 07:19:32.994625 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Oct 9 07:19:32.994717 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 9 07:19:32.994810 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:19:32.994913 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:19:32.995013 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Oct 9 07:19:32.995132 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Oct 9 07:19:32.995224 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 07:19:32.995334 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:19:32.995426 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Oct 9 07:19:32.995517 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Oct 9 07:19:32.995608 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 07:19:32.995720 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Oct 9 07:19:32.995829 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Oct 9 07:19:32.995964 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Oct 9 07:19:32.998149 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 07:19:32.998318 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:19:32.998419 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:19:32.998517 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Oct 9 07:19:32.998663 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 07:19:32.998798 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:19:32.998893 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Oct 9 07:19:32.998990 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Oct 9 07:19:33.000148 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 9 07:19:33.000278 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 07:19:33.000378 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Oct 9 07:19:33.000483 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 9 07:19:33.000495 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:19:33.000504 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:19:33.000513 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:19:33.000522 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:19:33.000531 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 07:19:33.000540 kernel: iommu: Default domain type: Translated
Oct 9 07:19:33.000553 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:19:33.000562 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:19:33.000571 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:19:33.000580 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:19:33.000589 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Oct 9 07:19:33.000686 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 07:19:33.000778 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 07:19:33.000870 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:19:33.000886 kernel: vgaarb: loaded
Oct 9 07:19:33.000895 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:19:33.000904 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:19:33.000913 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:19:33.000922 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:19:33.000931 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:19:33.000940 kernel: pnp: PnP ACPI init
Oct 9 07:19:33.000949 kernel: pnp: PnP ACPI: found 4 devices
Oct 9 07:19:33.000959 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:19:33.000971 kernel: NET: Registered PF_INET protocol family
Oct 9 07:19:33.000980 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:19:33.000992 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 07:19:33.001005 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:19:33.001018 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 07:19:33.001031 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 07:19:33.001043 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 07:19:33.002086 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:19:33.002110 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:19:33.002128 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:19:33.002138 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:19:33.002269 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:19:33.002359 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:19:33.002446 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:19:33.002530 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 07:19:33.002612 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 07:19:33.002716 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 07:19:33.002823 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 07:19:33.002836 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 07:19:33.002933 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 28860 usecs
Oct 9 07:19:33.002945 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:19:33.002955 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 9 07:19:33.002964 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 07:19:33.002973 kernel: Initialise system trusted keyrings
Oct 9 07:19:33.002982 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 07:19:33.002991 kernel: Key type asymmetric registered
Oct 9 07:19:33.003004 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:19:33.003013 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:19:33.003022 kernel: io scheduler mq-deadline registered
Oct 9 07:19:33.003030 kernel: io scheduler kyber registered
Oct 9 07:19:33.003039 kernel: io scheduler bfq registered
Oct 9 07:19:33.003048 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:19:33.006096 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 07:19:33.006124 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 07:19:33.006134 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 07:19:33.006151 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:19:33.006161 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:19:33.006170 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:19:33.006179 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:19:33.006189 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:19:33.006346 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 07:19:33.006361 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:19:33.006449 kernel: rtc_cmos 00:03: registered as rtc0
Oct 9 07:19:33.006541 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T07:19:32 UTC (1728458372)
Oct 9 07:19:33.006627 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 9 07:19:33.006639 kernel: intel_pstate: CPU model not supported
Oct 9 07:19:33.006649 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:19:33.006658 kernel: Segment Routing with IPv6
Oct 9 07:19:33.006666 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:19:33.006675 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:19:33.006684 kernel: Key type dns_resolver registered
Oct 9 07:19:33.006693 kernel: IPI shorthand broadcast: enabled
Oct 9 07:19:33.006706 kernel: sched_clock: Marking stable (919005517, 95388403)->(1039686114, -25292194)
Oct 9 07:19:33.006715 kernel: registered taskstats version 1
Oct 9 07:19:33.006723 kernel: Loading compiled-in X.509 certificates
Oct 9 07:19:33.006732 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76'
Oct 9 07:19:33.006741 kernel: Key type .fscrypt registered
Oct 9 07:19:33.006749 kernel: Key type fscrypt-provisioning registered
Oct 9 07:19:33.006758 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:19:33.006767 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:19:33.006779 kernel: ima: No architecture policies found
Oct 9 07:19:33.006788 kernel: clk: Disabling unused clocks
Oct 9 07:19:33.006797 kernel: Freeing unused kernel image (initmem) memory: 49452K
Oct 9 07:19:33.006806 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:19:33.006815 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K
Oct 9 07:19:33.006845 kernel: Run /init as init process
Oct 9 07:19:33.006858 kernel: with arguments:
Oct 9 07:19:33.006868 kernel: /init
Oct 9 07:19:33.006877 kernel: with environment:
Oct 9 07:19:33.006890 kernel: HOME=/
Oct 9 07:19:33.006899 kernel: TERM=linux
Oct 9 07:19:33.006907 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:19:33.006920 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:19:33.006932 systemd[1]: Detected virtualization kvm.
Oct 9 07:19:33.006942 systemd[1]: Detected architecture x86-64.
Oct 9 07:19:33.006951 systemd[1]: Running in initrd.
Oct 9 07:19:33.006960 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:19:33.006974 systemd[1]: Hostname set to .
Oct 9 07:19:33.006984 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:19:33.006994 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:19:33.007003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:19:33.007016 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:19:33.007026 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:19:33.007036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:19:33.007045 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:19:33.007630 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:19:33.007652 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:19:33.007663 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:19:33.007672 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:19:33.007682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:19:33.007693 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:19:33.007703 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:19:33.007720 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:19:33.007730 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:19:33.007744 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:19:33.007753 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:19:33.007763 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:19:33.007776 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:19:33.007787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:19:33.007797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:19:33.007807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:19:33.007835 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:19:33.007849 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:19:33.007863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:19:33.007876 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:19:33.007888 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:19:33.007909 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:19:33.007922 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:19:33.007935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:19:33.007949 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:19:33.007963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:19:33.007978 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:19:33.008000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:19:33.008079 systemd-journald[183]: Collecting audit messages is disabled.
Oct 9 07:19:33.008117 systemd-journald[183]: Journal started
Oct 9 07:19:33.008146 systemd-journald[183]: Runtime Journal (/run/log/journal/5ab27bcb1be34a539c0674af9637504f) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:19:32.992504 systemd-modules-load[184]: Inserted module 'overlay'
Oct 9 07:19:33.010084 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:19:33.015617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:19:33.030989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:19:33.039349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:19:33.050744 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:19:33.050800 kernel: Bridge firewalling registered
Oct 9 07:19:33.042325 systemd-modules-load[184]: Inserted module 'br_netfilter'
Oct 9 07:19:33.052317 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:19:33.054128 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:19:33.060992 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:19:33.065332 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 07:19:33.066846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:19:33.076324 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:19:33.079149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:19:33.091303 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:19:33.099306 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:19:33.101893 dracut-cmdline[208]: dracut-dracut-053
Oct 9 07:19:33.105097 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:19:33.108697 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:19:33.153768 systemd-resolved[226]: Positive Trust Anchors:
Oct 9 07:19:33.153784 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:19:33.153821 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:19:33.157271 systemd-resolved[226]: Defaulting to hostname 'linux'.
Oct 9 07:19:33.158717 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:19:33.160364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:19:33.230130 kernel: SCSI subsystem initialized
Oct 9 07:19:33.242135 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 07:19:33.256104 kernel: iscsi: registered transport (tcp)
Oct 9 07:19:33.283262 kernel: iscsi: registered transport (qla4xxx)
Oct 9 07:19:33.283370 kernel: QLogic iSCSI HBA Driver
Oct 9 07:19:33.341202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:19:33.352446 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 07:19:33.387163 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 07:19:33.387293 kernel: device-mapper: uevent: version 1.0.3
Oct 9 07:19:33.387312 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 07:19:33.441165 kernel: raid6: avx2x4 gen() 22192 MB/s
Oct 9 07:19:33.458144 kernel: raid6: avx2x2 gen() 22488 MB/s
Oct 9 07:19:33.475383 kernel: raid6: avx2x1 gen() 18459 MB/s
Oct 9 07:19:33.475499 kernel: raid6: using algorithm avx2x2 gen() 22488 MB/s
Oct 9 07:19:33.493490 kernel: raid6: .... xor() 19366 MB/s, rmw enabled
Oct 9 07:19:33.493617 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 07:19:33.524149 kernel: xor: automatically using best checksumming function avx
Oct 9 07:19:33.784126 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 07:19:33.801423 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:19:33.810577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:19:33.838901 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Oct 9 07:19:33.844635 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:19:33.850364 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 07:19:33.875746 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Oct 9 07:19:33.924957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:19:33.930377 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:19:34.010468 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:19:34.023513 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 07:19:34.045664 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:19:34.049414 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:19:34.051299 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:19:34.052187 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:19:34.060399 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 07:19:34.085244 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:19:34.108108 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 9 07:19:34.123097 kernel: libata version 3.00 loaded.
Oct 9 07:19:34.126113 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 9 07:19:34.128087 kernel: scsi host1: ata_piix
Oct 9 07:19:34.128388 kernel: scsi host2: ata_piix
Oct 9 07:19:34.129798 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Oct 9 07:19:34.129846 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Oct 9 07:19:34.133409 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 9 07:19:34.135100 kernel: scsi host0: Virtio SCSI HBA
Oct 9 07:19:34.147123 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 07:19:34.157845 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 07:19:34.157947 kernel: GPT:9289727 != 125829119
Oct 9 07:19:34.157960 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 07:19:34.157973 kernel: GPT:9289727 != 125829119
Oct 9 07:19:34.157985 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 07:19:34.157997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:19:34.166462 kernel: ACPI: bus type USB registered
Oct 9 07:19:34.166549 kernel: usbcore: registered new interface driver usbfs
Oct 9 07:19:34.166564 kernel: usbcore: registered new interface driver hub
Oct 9 07:19:34.167380 kernel: usbcore: registered new device driver usb
Oct 9 07:19:34.171092 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Oct 9 07:19:34.172099 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:19:34.175316 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Oct 9 07:19:34.172243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:19:34.174916 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:19:34.175373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:19:34.175636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:19:34.176702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:19:34.187557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:19:34.232527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:19:34.237431 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:19:34.263464 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:19:34.340988 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 07:19:34.341127 kernel: AES CTR mode by8 optimization enabled
Oct 9 07:19:34.348103 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (452)
Oct 9 07:19:34.356984 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 07:19:34.369955 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Oct 9 07:19:34.394094 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 9 07:19:34.395571 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 07:19:34.406246 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 9 07:19:34.406607 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 9 07:19:34.413366 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:19:34.418093 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 9 07:19:34.423942 kernel: hub 1-0:1.0: USB hub found
Oct 9 07:19:34.426483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 07:19:34.428117 kernel: hub 1-0:1.0: 2 ports detected
Oct 9 07:19:34.427564 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 07:19:34.439431 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 07:19:34.448237 disk-uuid[548]: Primary Header is updated.
Oct 9 07:19:34.448237 disk-uuid[548]: Secondary Entries is updated.
Oct 9 07:19:34.448237 disk-uuid[548]: Secondary Header is updated.
Oct 9 07:19:34.455361 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:19:34.464116 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:19:35.466109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:19:35.466939 disk-uuid[549]: The operation has completed successfully.
Oct 9 07:19:35.512924 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 07:19:35.513054 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 07:19:35.520384 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 07:19:35.526406 sh[560]: Success
Oct 9 07:19:35.546498 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 9 07:19:35.617405 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 07:19:35.622266 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 07:19:35.623104 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 07:19:35.659206 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a
Oct 9 07:19:35.659297 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:19:35.659324 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 07:19:35.661327 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 07:19:35.661427 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 07:19:35.670513 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 07:19:35.671719 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 07:19:35.678327 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 07:19:35.681248 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 07:19:35.694397 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:19:35.694476 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:19:35.695282 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:19:35.701099 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:19:35.712612 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 07:19:35.713724 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:19:35.719402 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 07:19:35.727316 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 07:19:35.841165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:19:35.849389 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:19:35.862917 ignition[646]: Ignition 2.18.0
Oct 9 07:19:35.862925 ignition[646]: Stage: fetch-offline
Oct 9 07:19:35.862988 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:19:35.863000 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:19:35.863263 ignition[646]: parsed url from cmdline: ""
Oct 9 07:19:35.863268 ignition[646]: no config URL provided
Oct 9 07:19:35.863275 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:19:35.866345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:19:35.863286 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:19:35.863295 ignition[646]: failed to fetch config: resource requires networking
Oct 9 07:19:35.863624 ignition[646]: Ignition finished successfully
Oct 9 07:19:35.887753 systemd-networkd[749]: lo: Link UP
Oct 9 07:19:35.887769 systemd-networkd[749]: lo: Gained carrier
Oct 9 07:19:35.890312 systemd-networkd[749]: Enumeration completed
Oct 9 07:19:35.890718 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 07:19:35.890722 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 9 07:19:35.892467 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:19:35.892471 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:19:35.892472 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:19:35.893565 systemd[1]: Reached target network.target - Network.
Oct 9 07:19:35.893656 systemd-networkd[749]: eth0: Link UP
Oct 9 07:19:35.893661 systemd-networkd[749]: eth0: Gained carrier
Oct 9 07:19:35.893669 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 07:19:35.898121 systemd-networkd[749]: eth1: Link UP
Oct 9 07:19:35.898125 systemd-networkd[749]: eth1: Gained carrier
Oct 9 07:19:35.898139 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:19:35.899410 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 9 07:19:35.907154 systemd-networkd[749]: eth0: DHCPv4 address 161.35.237.80/20, gateway 161.35.224.1 acquired from 169.254.169.253
Oct 9 07:19:35.912139 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.14/20 acquired from 169.254.169.253
Oct 9 07:19:35.917636 ignition[753]: Ignition 2.18.0
Oct 9 07:19:35.917660 ignition[753]: Stage: fetch
Oct 9 07:19:35.917924 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:19:35.917938 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:19:35.918040 ignition[753]: parsed url from cmdline: ""
Oct 9 07:19:35.918044 ignition[753]: no config URL provided
Oct 9 07:19:35.918050 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:19:35.918073 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:19:35.918100 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 9 07:19:35.932491 ignition[753]: GET result: OK
Oct 9 07:19:35.932774 ignition[753]: parsing config with SHA512: 1ce5a9244185850ff40c086567017b6f67b5fee8d2b73d27f54475761a8d29689c388d93334817f217bd3c3024e011eb70643b6f3ef112633fb3523c76be9bd5
Oct 9 07:19:35.938575 unknown[753]: fetched base config from "system"
Oct 9 07:19:35.938592 unknown[753]: fetched base config from "system"
Oct 9 07:19:35.938598 unknown[753]: fetched user config from "digitalocean"
Oct 9 07:19:35.941343 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 9 07:19:35.939612 ignition[753]: fetch: fetch complete
Oct 9 07:19:35.939618 ignition[753]: fetch: fetch passed
Oct 9 07:19:35.939673 ignition[753]: Ignition finished successfully
Oct 9 07:19:35.947329 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 07:19:35.966615 ignition[761]: Ignition 2.18.0
Oct 9 07:19:35.966630 ignition[761]: Stage: kargs
Oct 9 07:19:35.966850 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:19:35.966862 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:19:35.968025 ignition[761]: kargs: kargs passed
Oct 9 07:19:35.969216 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 07:19:35.968106 ignition[761]: Ignition finished successfully
Oct 9 07:19:35.974281 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 07:19:36.008789 ignition[768]: Ignition 2.18.0
Oct 9 07:19:36.008802 ignition[768]: Stage: disks
Oct 9 07:19:36.009098 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:19:36.009114 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:19:36.013304 ignition[768]: disks: disks passed
Oct 9 07:19:36.013367 ignition[768]: Ignition finished successfully
Oct 9 07:19:36.014318 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 07:19:36.015327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 07:19:36.020299 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:19:36.020692 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:19:36.021009 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:19:36.021343 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:19:36.033314 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 07:19:36.053822 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 07:19:36.058005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 07:19:36.061221 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 07:19:36.206097 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none.
Oct 9 07:19:36.207573 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 07:19:36.209298 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:19:36.220256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:19:36.223287 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 07:19:36.232374 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Oct 9 07:19:36.233597 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
Oct 9 07:19:36.239100 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:19:36.239285 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 07:19:36.245223 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:19:36.245258 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:19:36.245434 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 07:19:36.245476 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:19:36.249917 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 07:19:36.255104 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:19:36.267213 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 07:19:36.274001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:19:36.316999 coreos-metadata[787]: Oct 09 07:19:36.316 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:19:36.323480 coreos-metadata[788]: Oct 09 07:19:36.323 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:19:36.328106 coreos-metadata[787]: Oct 09 07:19:36.328 INFO Fetch successful
Oct 9 07:19:36.334861 coreos-metadata[788]: Oct 09 07:19:36.334 INFO Fetch successful
Oct 9 07:19:36.337829 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Oct 9 07:19:36.337942 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Oct 9 07:19:36.348500 coreos-metadata[788]: Oct 09 07:19:36.347 INFO wrote hostname ci-3975.2.2-f-f6e42a54cc to /sysroot/etc/hostname
Oct 9 07:19:36.348943 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:19:36.353390 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 07:19:36.358216 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Oct 9 07:19:36.363232 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 07:19:36.367960 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 07:19:36.480086 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 07:19:36.495337 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 07:19:36.499428 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 07:19:36.511102 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:19:36.555496 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 07:19:36.557885 ignition[905]: INFO : Ignition 2.18.0
Oct 9 07:19:36.557885 ignition[905]: INFO : Stage: mount
Oct 9 07:19:36.559049 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:19:36.559049 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:19:36.560298 ignition[905]: INFO : mount: mount passed
Oct 9 07:19:36.560298 ignition[905]: INFO : Ignition finished successfully
Oct 9 07:19:36.561274 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 07:19:36.566288 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 07:19:36.656401 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 07:19:36.673420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:19:36.685136 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Oct 9 07:19:36.688636 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:19:36.688712 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:19:36.688726 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:19:36.694101 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:19:36.697428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:19:36.729478 ignition[935]: INFO : Ignition 2.18.0 Oct 9 07:19:36.729478 ignition[935]: INFO : Stage: files Oct 9 07:19:36.730598 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:19:36.730598 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:19:36.731501 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:19:36.732549 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:19:36.733247 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:19:36.736106 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:19:36.736764 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:19:36.737493 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:19:36.736799 unknown[935]: wrote ssh authorized keys file for user: core Oct 9 07:19:36.740005 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 07:19:36.740769 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 07:19:36.740769 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:19:36.740769 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:19:36.788807 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 07:19:36.892988 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:19:36.892988 
ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:19:36.895483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 9 07:19:37.326518 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 9 07:19:37.446353 systemd-networkd[749]: eth1: Gained IPv6LL
Oct 9 07:19:37.561571 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:19:37.562633 ignition[935]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 9 07:19:37.564891 ignition[935]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 07:19:37.564891 ignition[935]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:19:37.567799 ignition[935]: INFO : files: files passed
Oct 9 07:19:37.567799 ignition[935]: INFO : Ignition finished successfully
Oct 9 07:19:37.568421 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 07:19:37.573642 systemd-networkd[749]: eth0: Gained IPv6LL
Oct 9 07:19:37.575295 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 07:19:37.584314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:19:37.590208 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:19:37.590355 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:19:37.598174 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:19:37.598174 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:19:37.601153 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:19:37.603261 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:19:37.604612 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:19:37.616392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:19:37.658255 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
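For context, file-write operations like op(5)–op(9) and the op(a) symlink above are the kind produced by the `storage` section of a Butane/Ignition provisioning config. A minimal hypothetical sketch follows — the paths are taken from the log, but the log does not record the files' contents, so everything inside `inline:` (and the `REBOOT_STRATEGY` value) is an assumption for illustration only:

```yaml
# Hypothetical Butane sketch — NOT the actual config used on this droplet.
# Paths match the Ignition log entries above; all contents are placeholders.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /home/core/install.sh
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          # provisioning steps (actual contents unknown)
    - path: /etc/flatcar/update.conf
      contents:
        inline: |
          REBOOT_STRATEGY=off   # assumed value; not in the log
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
      hard: false
```

Butane renders this YAML into the JSON Ignition consumes at first boot; each `files` and `links` entry maps to one of the numbered `op(…)` steps logged above.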
Oct 9 07:19:37.658420 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:19:37.660135 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:19:37.660739 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:19:37.661699 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:19:37.672376 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:19:37.690706 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:19:37.697368 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:19:37.722562 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:19:37.723713 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:19:37.724264 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:19:37.724748 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:19:37.724925 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:19:37.726095 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:19:37.726658 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:19:37.727484 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:19:37.728572 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:19:37.729645 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:19:37.730768 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:19:37.732103 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:19:37.732826 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:19:37.733881 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:19:37.734825 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:19:37.735700 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:19:37.735957 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:19:37.737390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:19:37.738526 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:19:37.739352 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:19:37.739496 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:19:37.740464 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:19:37.740617 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:19:37.741935 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:19:37.742202 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:19:37.743354 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:19:37.743482 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:19:37.744399 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 07:19:37.744568 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:19:37.755507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:19:37.757301 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:19:37.757563 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:19:37.762429 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:19:37.763564 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:19:37.764347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:19:37.765404 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:19:37.765529 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:19:37.774966 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:19:37.777293 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:19:37.781008 ignition[988]: INFO : Ignition 2.18.0
Oct 9 07:19:37.781008 ignition[988]: INFO : Stage: umount
Oct 9 07:19:37.789659 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:19:37.789659 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:19:37.789659 ignition[988]: INFO : umount: umount passed
Oct 9 07:19:37.789659 ignition[988]: INFO : Ignition finished successfully
Oct 9 07:19:37.787110 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:19:37.787243 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:19:37.788632 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:19:37.788762 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:19:37.790354 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:19:37.790431 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:19:37.791461 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 07:19:37.791516 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 07:19:37.792831 systemd[1]: Stopped target network.target - Network.
Oct 9 07:19:37.793619 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:19:37.793690 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:19:37.794262 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:19:37.795087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:19:37.795828 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:19:37.796488 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:19:37.797481 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:19:37.798365 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:19:37.798431 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:19:37.799767 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:19:37.799827 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:19:37.800718 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:19:37.800794 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:19:37.802041 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:19:37.802419 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:19:37.803253 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:19:37.825221 systemd-networkd[749]: eth1: DHCPv6 lease lost
Oct 9 07:19:37.830423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:19:37.831726 systemd-networkd[749]: eth0: DHCPv6 lease lost
Oct 9 07:19:37.832623 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:19:37.840227 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:19:37.840419 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:19:37.857950 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:19:37.858357 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:19:37.862556 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:19:37.864847 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:19:37.864957 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:19:37.868223 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:19:37.873885 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:19:37.874165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:19:37.881907 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:19:37.883320 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:19:37.889741 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:19:37.890010 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:19:37.894718 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:19:37.895425 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:19:37.896439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:19:37.897002 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:19:37.897986 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:19:37.898049 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:19:37.898802 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:19:37.898850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:19:37.900003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:19:37.900097 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:19:37.900592 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:19:37.900638 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:19:37.904352 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:19:37.904801 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:19:37.904868 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:19:37.905876 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:19:37.905947 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:19:37.906646 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:19:37.906697 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:19:37.907928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:19:37.907982 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:19:37.910724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:19:37.910799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:19:37.916528 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:19:37.916718 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:19:37.926413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:19:37.926604 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:19:37.928448 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:19:37.934379 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:19:37.958534 systemd[1]: Switching root.
Oct 9 07:19:38.016136 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:19:38.016284 systemd-journald[183]: Journal stopped
Oct 9 07:19:39.216035 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:19:39.217181 kernel: SELinux: policy capability open_perms=1
Oct 9 07:19:39.217211 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:19:39.217238 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:19:39.217268 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:19:39.217294 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:19:39.217350 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:19:39.217371 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:19:39.217407 systemd[1]: Successfully loaded SELinux policy in 40.209ms.
Oct 9 07:19:39.217443 kernel: audit: type=1403 audit(1728458378.216:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:19:39.217464 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.264ms.
Oct 9 07:19:39.217488 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:19:39.217513 systemd[1]: Detected virtualization kvm.
Oct 9 07:19:39.217533 systemd[1]: Detected architecture x86-64.
Oct 9 07:19:39.217552 systemd[1]: Detected first boot.
Oct 9 07:19:39.217573 systemd[1]: Hostname set to .
Oct 9 07:19:39.217594 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:19:39.217615 zram_generator::config[1047]: No configuration found.
Oct 9 07:19:39.217642 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:19:39.217664 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:19:39.217687 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:19:39.217709 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:19:39.217730 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:19:39.217751 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:19:39.217769 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:19:39.217790 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:19:39.217811 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:19:39.217832 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:19:39.217862 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:19:39.217884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:19:39.217906 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:19:39.217927 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:19:39.217949 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:19:39.217971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:19:39.217992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:19:39.218014 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:19:39.218034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:19:39.218072 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:19:39.218093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:19:39.218112 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:19:39.218136 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:19:39.218192 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:19:39.218209 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:19:39.218227 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:19:39.218251 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:19:39.218269 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:19:39.218287 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:19:39.218304 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:19:39.218322 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:19:39.218342 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:19:39.218360 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:19:39.218380 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:19:39.218402 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:19:39.218424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:39.218441 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:19:39.218461 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:19:39.218480 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:19:39.218501 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:19:39.218522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:19:39.218541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:19:39.218563 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:19:39.218613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:19:39.218648 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:19:39.218670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:19:39.218690 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:19:39.218711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:19:39.218732 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:19:39.218750 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 9 07:19:39.218769 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 9 07:19:39.218788 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:19:39.218811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:19:39.218830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:19:39.218850 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:19:39.218869 kernel: loop: module loaded
Oct 9 07:19:39.218889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:19:39.218908 kernel: fuse: init (API version 7.39)
Oct 9 07:19:39.218927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:39.218946 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:19:39.218969 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:19:39.218989 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:19:39.219012 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:19:39.219032 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:19:39.221115 systemd-journald[1137]: Collecting audit messages is disabled.
Oct 9 07:19:39.221203 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:19:39.221233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:19:39.221255 systemd-journald[1137]: Journal started
Oct 9 07:19:39.221292 systemd-journald[1137]: Runtime Journal (/run/log/journal/5ab27bcb1be34a539c0674af9637504f) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:19:39.235140 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:19:39.234554 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:19:39.234816 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:19:39.235890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:19:39.237186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:19:39.238749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:19:39.238992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:19:39.240068 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:19:39.240309 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:19:39.241200 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:19:39.241431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:19:39.243662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:19:39.244660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:19:39.245721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:19:39.261937 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:19:39.275371 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:19:39.281595 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:19:39.282624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:19:39.289375 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:19:39.302433 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:19:39.304625 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:19:39.316336 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:19:39.317036 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:19:39.321558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:19:39.344310 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:19:39.381188 systemd-journald[1137]: Time spent on flushing to /var/log/journal/5ab27bcb1be34a539c0674af9637504f is 75.171ms for 967 entries.
Oct 9 07:19:39.381188 systemd-journald[1137]: System Journal (/var/log/journal/5ab27bcb1be34a539c0674af9637504f) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:19:39.496523 systemd-journald[1137]: Received client request to flush runtime journal.
Oct 9 07:19:39.496627 kernel: ACPI: bus type drm_connector registered
Oct 9 07:19:39.380848 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:19:39.388827 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:19:39.389136 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:19:39.389986 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:19:39.393184 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:19:39.399604 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:19:39.407354 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:19:39.481006 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:19:39.501948 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:19:39.506250 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Oct 9 07:19:39.506272 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Oct 9 07:19:39.514723 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:19:39.524460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:19:39.525540 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:19:39.536996 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:19:39.558390 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 07:19:39.585308 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:19:39.599346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:19:39.619442 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Oct 9 07:19:39.619464 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Oct 9 07:19:39.625387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:19:40.249168 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:19:40.259350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:19:40.291076 systemd-udevd[1221]: Using default interface naming scheme 'v255'.
Oct 9 07:19:40.320910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:19:40.333365 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:19:40.360338 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 07:19:40.430143 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1230)
Oct 9 07:19:40.452318 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Oct 9 07:19:40.453465 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 07:19:40.462393 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:40.462576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:19:40.472290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:19:40.484306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:19:40.495194 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1222)
Oct 9 07:19:40.492700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:19:40.512634 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:19:40.512713 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:19:40.512778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:40.518740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:19:40.522940 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:19:40.536179 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:19:40.543643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:19:40.543890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:19:40.547929 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:19:40.551344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:19:40.553529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:19:40.607633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:19:40.634257 systemd-networkd[1226]: lo: Link UP
Oct 9 07:19:40.634266 systemd-networkd[1226]: lo: Gained carrier
Oct 9 07:19:40.639023 systemd-networkd[1226]: Enumeration completed
Oct 9 07:19:40.639807 systemd-networkd[1226]: eth0: Configuring with /run/systemd/network/10-9a:49:68:b6:b5:d7.network.
Oct 9 07:19:40.641039 systemd-networkd[1226]: eth1: Configuring with /run/systemd/network/10-72:66:a3:ed:55:c8.network.
Oct 9 07:19:40.641541 systemd-networkd[1226]: eth0: Link UP
Oct 9 07:19:40.641546 systemd-networkd[1226]: eth0: Gained carrier
Oct 9 07:19:40.641730 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:19:40.646467 systemd-networkd[1226]: eth1: Link UP
Oct 9 07:19:40.646478 systemd-networkd[1226]: eth1: Gained carrier
Oct 9 07:19:40.649952 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 07:19:40.687146 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 9 07:19:40.702125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 07:19:40.717211 kernel: ACPI: button: Power Button [PWRF]
Oct 9 07:19:40.723202 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 07:19:40.762090 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 07:19:40.768466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
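The two `Configuring with /run/systemd/network/10-<mac>.network` entries above show systemd-networkd picking up runtime units keyed by interface MAC address (generated earlier in boot, as the `parse-ip-for-networkd.service` shutdown entry suggests). A hypothetical sketch of what such a unit generally looks like — the real file is not reproduced in the log, so everything in the `[Network]` section is an assumption:

```ini
; Hypothetical sketch of /run/systemd/network/10-9a:49:68:b6:b5:d7.network.
; Only the match-by-MAC structure is implied by the log; settings below are
; illustrative placeholders, not the droplet's actual configuration.
[Match]
MACAddress=9a:49:68:b6:b5:d7

[Network]
DHCP=yes
```

Matching on `MACAddress=` rather than interface name keeps the unit stable across the kernel's probe order, which is why the "Link UP" / "Gained carrier" lines can arrive for eth0 and eth1 in any order.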
Oct 9 07:19:40.778155 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 9 07:19:40.784106 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 9 07:19:40.802118 kernel: Console: switching to colour dummy device 80x25
Oct 9 07:19:40.802237 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 07:19:40.802292 kernel: [drm] features: -context_init
Oct 9 07:19:40.804276 kernel: [drm] number of scanouts: 1
Oct 9 07:19:40.804375 kernel: [drm] number of cap sets: 0
Oct 9 07:19:40.804406 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Oct 9 07:19:40.824287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:19:40.824607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:19:40.828817 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 07:19:40.828930 kernel: Console: switching to colour frame buffer device 128x48
Oct 9 07:19:40.836736 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 07:19:40.836354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:19:40.927359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:19:40.927672 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:19:40.942522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:19:40.970183 kernel: EDAC MC: Ver: 3.0.0
Oct 9 07:19:41.005672 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 07:19:41.016419 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 07:19:41.047607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:19:41.049127 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:19:41.092271 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 07:19:41.093054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:19:41.100387 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 07:19:41.121538 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:19:41.152926 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 07:19:41.153565 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:19:41.160288 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 9 07:19:41.160748 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:19:41.160802 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:19:41.164387 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 07:19:41.188364 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 9 07:19:41.192587 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 9 07:19:41.193995 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:19:41.198592 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:19:41.213676 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:19:41.216126 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:19:41.220467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:19:41.235398 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:19:41.241356 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:19:41.245646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 07:19:41.253934 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:19:41.281817 kernel: loop0: detected capacity change from 0 to 8
Oct 9 07:19:41.282045 kernel: block loop0: the capability attribute has been deprecated.
Oct 9 07:19:41.296155 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:19:41.306889 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:19:41.310555 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:19:41.321259 kernel: loop1: detected capacity change from 0 to 211296
Oct 9 07:19:41.354097 kernel: loop2: detected capacity change from 0 to 139904
Oct 9 07:19:41.402167 kernel: loop3: detected capacity change from 0 to 80568
Oct 9 07:19:41.436267 kernel: loop4: detected capacity change from 0 to 8
Oct 9 07:19:41.440293 kernel: loop5: detected capacity change from 0 to 211296
Oct 9 07:19:41.472757 kernel: loop6: detected capacity change from 0 to 139904
Oct 9 07:19:41.497187 kernel: loop7: detected capacity change from 0 to 80568
Oct 9 07:19:41.516813 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 9 07:19:41.517467 (sd-merge)[1320]: Merged extensions into '/usr'.
Oct 9 07:19:41.523367 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:19:41.523392 systemd[1]: Reloading...
Oct 9 07:19:41.630823 zram_generator::config[1346]: No configuration found.
Oct 9 07:19:41.877288 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:19:41.892095 ldconfig[1306]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:19:41.962181 systemd[1]: Reloading finished in 438 ms.
Oct 9 07:19:41.978962 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:19:41.981381 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:19:41.993471 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:19:41.999415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:19:42.009624 systemd[1]: Reloading requested from client PID 1396 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:19:42.009833 systemd[1]: Reloading...
Oct 9 07:19:42.039683 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:19:42.040218 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:19:42.041161 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:19:42.043330 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Oct 9 07:19:42.043400 systemd-tmpfiles[1397]: ACLs are not supported, ignoring.
Oct 9 07:19:42.046523 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:19:42.046539 systemd-tmpfiles[1397]: Skipping /boot
Oct 9 07:19:42.053321 systemd-networkd[1226]: eth1: Gained IPv6LL
Oct 9 07:19:42.064185 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:19:42.064202 systemd-tmpfiles[1397]: Skipping /boot
Oct 9 07:19:42.105253 zram_generator::config[1425]: No configuration found.
Oct 9 07:19:42.274184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:19:42.309287 systemd-networkd[1226]: eth0: Gained IPv6LL
Oct 9 07:19:42.348658 systemd[1]: Reloading finished in 338 ms.
Oct 9 07:19:42.369350 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 07:19:42.376982 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:19:42.395409 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:19:42.407485 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:19:42.415400 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:19:42.435409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:19:42.450504 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 07:19:42.469486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:42.471454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:19:42.476505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:19:42.502713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:19:42.522812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:19:42.523661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:19:42.523817 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:42.536549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:19:42.536866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:19:42.548019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:19:42.552533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:19:42.562786 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 07:19:42.575772 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 07:19:42.588700 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 07:19:42.594190 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:19:42.594430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:19:42.607361 augenrules[1510]: No rules
Oct 9 07:19:42.611320 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:19:42.620799 systemd[1]: Finished ensure-sysext.service.
Oct 9 07:19:42.631646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:42.631873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:19:42.641437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:19:42.647348 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:19:42.659332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:19:42.672391 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:19:42.676302 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:19:42.682933 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 07:19:42.693090 systemd-resolved[1487]: Positive Trust Anchors:
Oct 9 07:19:42.695136 systemd-resolved[1487]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:19:42.695199 systemd-resolved[1487]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:19:42.704453 systemd-resolved[1487]: Using system hostname 'ci-3975.2.2-f-f6e42a54cc'.
Oct 9 07:19:42.711348 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 07:19:42.713917 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:19:42.713976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:19:42.714689 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:19:42.724751 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:19:42.725885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:19:42.728394 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:19:42.728684 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:19:42.730908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:19:42.732422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:19:42.734900 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:19:42.736180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:19:42.742516 systemd[1]: Reached target network.target - Network.
Oct 9 07:19:42.745778 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 07:19:42.748976 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:19:42.749672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:19:42.749775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:19:42.755996 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 07:19:42.842855 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 07:19:42.844401 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:19:42.845916 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 07:19:42.846773 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 07:19:42.847524 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 07:19:42.849950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 07:19:42.850017 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:19:42.850979 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 07:19:42.851697 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 07:19:42.852923 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 07:19:42.853599 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:19:42.856162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 07:19:42.860170 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 07:19:42.865024 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 07:19:42.869978 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 07:19:42.870689 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:19:42.871247 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:19:42.872980 systemd[1]: System is tainted: cgroupsv1
Oct 9 07:19:42.873072 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:19:42.873107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:19:42.875268 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 07:19:42.885442 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 07:19:42.900342 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 07:19:42.904720 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 07:19:42.919335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 07:19:42.921467 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 07:19:42.932665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:19:42.940381 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 07:19:42.949708 dbus-daemon[1546]: [system] SELinux support is enabled
Oct 9 07:19:42.958374 jq[1547]: false
Oct 9 07:19:42.963370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 07:19:42.974345 coreos-metadata[1544]: Oct 09 07:19:42.974 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:19:42.981421 coreos-metadata[1544]: Oct 09 07:19:42.976 INFO Fetch successful
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found loop4
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found loop5
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found loop6
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found loop7
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda1
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda2
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda3
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found usr
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda4
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda6
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda7
Oct 9 07:19:42.981504 extend-filesystems[1550]: Found vda9
Oct 9 07:19:42.981504 extend-filesystems[1550]: Checking size of /dev/vda9
Oct 9 07:19:42.989395 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 07:19:43.018238 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 07:19:43.027309 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 07:19:43.036283 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 07:19:43.038776 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 07:19:43.051331 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 07:19:43.061667 extend-filesystems[1550]: Resized partition /dev/vda9
Oct 9 07:19:43.072777 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 07:19:43.079390 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 07:19:43.082805 extend-filesystems[1584]: resize2fs 1.47.0 (5-Feb-2023)
Oct 9 07:19:43.103596 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Oct 9 07:19:43.114677 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 07:19:43.114943 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 07:19:43.119797 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 07:19:43.121195 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 07:19:43.125888 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 07:19:43.138151 jq[1579]: true
Oct 9 07:19:43.141521 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 07:19:43.141785 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 07:19:43.640695 systemd-timesyncd[1528]: Contacted time server 23.186.168.2:123 (0.flatcar.pool.ntp.org).
Oct 9 07:19:43.640762 systemd-timesyncd[1528]: Initial clock synchronization to Wed 2024-10-09 07:19:43.640477 UTC.
Oct 9 07:19:43.642974 systemd-resolved[1487]: Clock change detected. Flushing caches.
Oct 9 07:19:43.659010 update_engine[1577]: I1009 07:19:43.658864 1577 main.cc:92] Flatcar Update Engine starting
Oct 9 07:19:43.686606 update_engine[1577]: I1009 07:19:43.685490 1577 update_check_scheduler.cc:74] Next update check in 11m58s
Oct 9 07:19:43.714885 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1237)
Oct 9 07:19:43.723281 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 07:19:43.738751 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 07:19:43.764147 tar[1591]: linux-amd64/helm
Oct 9 07:19:43.769969 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 07:19:43.774500 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 07:19:43.774693 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 07:19:43.774736 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 07:19:43.776027 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 07:19:43.776121 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 9 07:19:43.776145 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 07:19:43.777936 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 07:19:43.786760 jq[1593]: true
Oct 9 07:19:43.784781 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 07:19:43.829075 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 9 07:19:43.856666 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 07:19:43.856666 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 9 07:19:43.856666 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 9 07:19:43.877870 extend-filesystems[1550]: Resized filesystem in /dev/vda9
Oct 9 07:19:43.877870 extend-filesystems[1550]: Found vdb
Oct 9 07:19:43.857039 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 07:19:43.857327 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 07:19:43.882801 systemd-logind[1573]: New seat seat0.
Oct 9 07:19:43.888677 systemd-logind[1573]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 07:19:43.888703 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 07:19:43.889032 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 07:19:43.923107 bash[1629]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:19:43.926289 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 07:19:43.948241 systemd[1]: Starting sshkeys.service...
Oct 9 07:19:44.015819 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 07:19:44.028180 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 07:19:44.194679 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 07:19:44.199051 coreos-metadata[1639]: Oct 09 07:19:44.197 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:19:44.217430 coreos-metadata[1639]: Oct 09 07:19:44.215 INFO Fetch successful
Oct 9 07:19:44.237792 unknown[1639]: wrote ssh authorized keys file for user: core
Oct 9 07:19:44.296618 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 07:19:44.304900 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:19:44.308058 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 07:19:44.318932 systemd[1]: Finished sshkeys.service.
Oct 9 07:19:44.348958 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 07:19:44.373309 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 07:19:44.409668 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 07:19:44.409977 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 07:19:44.430011 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 07:19:44.485294 containerd[1594]: time="2024-10-09T07:19:44.485159170Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 9 07:19:44.498873 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 07:19:44.516205 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 07:19:44.544410 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 07:19:44.548248 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 07:19:44.554995 containerd[1594]: time="2024-10-09T07:19:44.554813706Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 07:19:44.555610 containerd[1594]: time="2024-10-09T07:19:44.555367735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:19:44.557495 containerd[1594]: time="2024-10-09T07:19:44.557422846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:19:44.557655 containerd[1594]: time="2024-10-09T07:19:44.557637436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:19:44.558087 containerd[1594]: time="2024-10-09T07:19:44.558064496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:19:44.558567 containerd[1594]: time="2024-10-09T07:19:44.558142575Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 07:19:44.558567 containerd[1594]: time="2024-10-09T07:19:44.558236563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 07:19:44.558567 containerd[1594]: time="2024-10-09T07:19:44.558284806Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:19:44.558567 containerd[1594]: time="2024-10-09T07:19:44.558312713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 07:19:44.558567 containerd[1594]: time="2024-10-09T07:19:44.558406262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:19:44.558956 containerd[1594]: time="2024-10-09T07:19:44.558934341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 07:19:44.559031 containerd[1594]: time="2024-10-09T07:19:44.559018158Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 9 07:19:44.559086 containerd[1594]: time="2024-10-09T07:19:44.559076613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:19:44.559324 containerd[1594]: time="2024-10-09T07:19:44.559308952Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:19:44.559409 containerd[1594]: time="2024-10-09T07:19:44.559393051Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 07:19:44.559825 containerd[1594]: time="2024-10-09T07:19:44.559634665Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 9 07:19:44.559825 containerd[1594]: time="2024-10-09T07:19:44.559662806Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 07:19:44.572478 containerd[1594]: time="2024-10-09T07:19:44.572412494Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.572728581Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.572750890Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.572834716Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.572852441Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.572864926Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.572924193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573162312Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573184365Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573217172Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573232817Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573252403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573272435Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573289109Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.573699 containerd[1594]: time="2024-10-09T07:19:44.573302953Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.574091 containerd[1594]: time="2024-10-09T07:19:44.573318224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.574091 containerd[1594]: time="2024-10-09T07:19:44.573333849Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.574091 containerd[1594]: time="2024-10-09T07:19:44.573346341Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.574091 containerd[1594]: time="2024-10-09T07:19:44.573359561Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:19:44.574091 containerd[1594]: time="2024-10-09T07:19:44.573531836Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576348594Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576412566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576428687Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576471611Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576544867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576571740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576585470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576597731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576610949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576624745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576637569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576649452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576661906Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:19:44.578645 containerd[1594]: time="2024-10-09T07:19:44.576885411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..."
type=io.containerd.grpc.v1 Oct 9 07:19:44.579140 containerd[1594]: time="2024-10-09T07:19:44.576920821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 07:19:44.579140 containerd[1594]: time="2024-10-09T07:19:44.576934137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 07:19:44.579140 containerd[1594]: time="2024-10-09T07:19:44.576948294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 07:19:44.579140 containerd[1594]: time="2024-10-09T07:19:44.576962749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 07:19:44.579140 containerd[1594]: time="2024-10-09T07:19:44.576977786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 07:19:44.579140 containerd[1594]: time="2024-10-09T07:19:44.576991636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 07:19:44.579140 containerd[1594]: time="2024-10-09T07:19:44.577003300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.577330100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.577389795Z" level=info msg="Connect containerd service" Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.577453703Z" level=info msg="using legacy CRI server" Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.577462480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.577579971Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.578289223Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.578371628Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.578402257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.578419568Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:19:44.579315 containerd[1594]: time="2024-10-09T07:19:44.578437792Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.582658652Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.582763252Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.582931810Z" level=info msg="Start subscribing containerd event" Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.582986169Z" level=info msg="Start recovering state" Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.583075543Z" level=info msg="Start event monitor" Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.583090736Z" level=info msg="Start snapshots syncer" Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.583106065Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:19:44.588618 containerd[1594]: time="2024-10-09T07:19:44.583118574Z" level=info msg="Start streaming server" Oct 9 07:19:44.583404 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:19:44.592597 containerd[1594]: time="2024-10-09T07:19:44.589718431Z" level=info msg="containerd successfully booted in 0.105838s" Oct 9 07:19:44.891371 tar[1591]: linux-amd64/LICENSE Oct 9 07:19:44.892358 tar[1591]: linux-amd64/README.md Oct 9 07:19:44.912298 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 9 07:19:45.284200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:19:45.288578 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:19:45.294662 systemd[1]: Startup finished in 6.572s (kernel) + 6.619s (userspace) = 13.191s. Oct 9 07:19:45.299015 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:19:46.161361 kubelet[1700]: E1009 07:19:46.161221 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:19:46.165693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:19:46.166022 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:19:46.282885 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:19:46.288966 systemd[1]: Started sshd@0-161.35.237.80:22-147.75.109.163:57900.service - OpenSSH per-connection server daemon (147.75.109.163:57900). Oct 9 07:19:46.360963 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 57900 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:19:46.364567 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:46.381614 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:19:46.388999 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:19:46.392219 systemd-logind[1573]: New session 1 of user core. Oct 9 07:19:46.424779 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 9 07:19:46.438112 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:19:46.444793 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:46.576385 systemd[1719]: Queued start job for default target default.target. Oct 9 07:19:46.577451 systemd[1719]: Created slice app.slice - User Application Slice. Oct 9 07:19:46.577493 systemd[1719]: Reached target paths.target - Paths. Oct 9 07:19:46.577506 systemd[1719]: Reached target timers.target - Timers. Oct 9 07:19:46.585797 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:19:46.596929 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:19:46.597013 systemd[1719]: Reached target sockets.target - Sockets. Oct 9 07:19:46.597028 systemd[1719]: Reached target basic.target - Basic System. Oct 9 07:19:46.597086 systemd[1719]: Reached target default.target - Main User Target. Oct 9 07:19:46.597122 systemd[1719]: Startup finished in 140ms. Oct 9 07:19:46.597277 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:19:46.609101 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:19:46.679485 systemd[1]: Started sshd@1-161.35.237.80:22-147.75.109.163:57916.service - OpenSSH per-connection server daemon (147.75.109.163:57916). Oct 9 07:19:46.732515 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 57916 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:19:46.734914 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:46.740770 systemd-logind[1573]: New session 2 of user core. Oct 9 07:19:46.745942 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:19:46.814783 sshd[1731]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:46.818311 systemd[1]: sshd@1-161.35.237.80:22-147.75.109.163:57916.service: Deactivated successfully. 
Oct 9 07:19:46.823394 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:19:46.828905 systemd[1]: Started sshd@2-161.35.237.80:22-147.75.109.163:57920.service - OpenSSH per-connection server daemon (147.75.109.163:57920). Oct 9 07:19:46.829238 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:19:46.831090 systemd-logind[1573]: Removed session 2. Oct 9 07:19:46.879101 sshd[1739]: Accepted publickey for core from 147.75.109.163 port 57920 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:19:46.881324 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:46.889253 systemd-logind[1573]: New session 3 of user core. Oct 9 07:19:46.895087 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:19:46.958902 sshd[1739]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:46.969012 systemd[1]: Started sshd@3-161.35.237.80:22-147.75.109.163:57928.service - OpenSSH per-connection server daemon (147.75.109.163:57928). Oct 9 07:19:46.969487 systemd[1]: sshd@2-161.35.237.80:22-147.75.109.163:57920.service: Deactivated successfully. Oct 9 07:19:46.978994 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:19:46.980745 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:19:46.982311 systemd-logind[1573]: Removed session 3. Oct 9 07:19:47.014304 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 57928 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:19:47.016378 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:47.023452 systemd-logind[1573]: New session 4 of user core. Oct 9 07:19:47.035827 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 9 07:19:47.101744 sshd[1744]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:47.112037 systemd[1]: Started sshd@4-161.35.237.80:22-147.75.109.163:38160.service - OpenSSH per-connection server daemon (147.75.109.163:38160). Oct 9 07:19:47.112646 systemd[1]: sshd@3-161.35.237.80:22-147.75.109.163:57928.service: Deactivated successfully. Oct 9 07:19:47.114945 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:19:47.116918 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:19:47.119760 systemd-logind[1573]: Removed session 4. Oct 9 07:19:47.158144 sshd[1752]: Accepted publickey for core from 147.75.109.163 port 38160 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:19:47.160034 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:47.165485 systemd-logind[1573]: New session 5 of user core. Oct 9 07:19:47.173036 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:19:47.244984 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:19:47.245880 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:19:47.263358 sudo[1759]: pam_unix(sudo:session): session closed for user root Oct 9 07:19:47.267845 sshd[1752]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:47.277981 systemd[1]: Started sshd@5-161.35.237.80:22-147.75.109.163:38166.service - OpenSSH per-connection server daemon (147.75.109.163:38166). Oct 9 07:19:47.278699 systemd[1]: sshd@4-161.35.237.80:22-147.75.109.163:38160.service: Deactivated successfully. Oct 9 07:19:47.286263 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:19:47.286960 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:19:47.288779 systemd-logind[1573]: Removed session 5. 
Oct 9 07:19:47.327474 sshd[1761]: Accepted publickey for core from 147.75.109.163 port 38166 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:19:47.329479 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:47.336373 systemd-logind[1573]: New session 6 of user core. Oct 9 07:19:47.346097 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:19:47.409580 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:19:47.410264 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:19:47.414522 sudo[1769]: pam_unix(sudo:session): session closed for user root Oct 9 07:19:47.421134 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:19:47.421430 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:19:47.438975 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:19:47.452865 auditctl[1772]: No rules Oct 9 07:19:47.453522 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:19:47.453835 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:19:47.472525 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:19:47.506632 augenrules[1791]: No rules Oct 9 07:19:47.507242 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:19:47.509957 sudo[1768]: pam_unix(sudo:session): session closed for user root Oct 9 07:19:47.513990 sshd[1761]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:47.523984 systemd[1]: Started sshd@6-161.35.237.80:22-147.75.109.163:38178.service - OpenSSH per-connection server daemon (147.75.109.163:38178). 
Oct 9 07:19:47.524739 systemd[1]: sshd@5-161.35.237.80:22-147.75.109.163:38166.service: Deactivated successfully. Oct 9 07:19:47.528229 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:19:47.529258 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:19:47.532341 systemd-logind[1573]: Removed session 6. Oct 9 07:19:47.570775 sshd[1798]: Accepted publickey for core from 147.75.109.163 port 38178 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:19:47.572714 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:47.579549 systemd-logind[1573]: New session 7 of user core. Oct 9 07:19:47.590159 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:19:47.653638 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:19:47.653940 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:19:47.838914 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:19:47.851275 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:19:48.275909 dockerd[1814]: time="2024-10-09T07:19:48.275310072Z" level=info msg="Starting up" Oct 9 07:19:48.461581 dockerd[1814]: time="2024-10-09T07:19:48.461170029Z" level=info msg="Loading containers: start." Oct 9 07:19:48.628580 kernel: Initializing XFRM netlink socket Oct 9 07:19:48.733705 systemd-networkd[1226]: docker0: Link UP Oct 9 07:19:48.761176 dockerd[1814]: time="2024-10-09T07:19:48.761133720Z" level=info msg="Loading containers: done." Oct 9 07:19:48.846188 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4198587147-merged.mount: Deactivated successfully. 
Oct 9 07:19:48.852281 dockerd[1814]: time="2024-10-09T07:19:48.852203353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:19:48.852547 dockerd[1814]: time="2024-10-09T07:19:48.852505428Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:19:48.852714 dockerd[1814]: time="2024-10-09T07:19:48.852686861Z" level=info msg="Daemon has completed initialization" Oct 9 07:19:48.904891 dockerd[1814]: time="2024-10-09T07:19:48.904788951Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:19:48.905067 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:19:50.043999 containerd[1594]: time="2024-10-09T07:19:50.043648625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:19:50.663958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414756790.mount: Deactivated successfully. 
Oct 9 07:19:52.229115 containerd[1594]: time="2024-10-09T07:19:52.229057851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:52.230966 containerd[1594]: time="2024-10-09T07:19:52.230907840Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 07:19:52.231250 containerd[1594]: time="2024-10-09T07:19:52.231159932Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:52.235563 containerd[1594]: time="2024-10-09T07:19:52.235071989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:52.236995 containerd[1594]: time="2024-10-09T07:19:52.236417625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.192716016s" Oct 9 07:19:52.236995 containerd[1594]: time="2024-10-09T07:19:52.236479842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:19:52.265041 containerd[1594]: time="2024-10-09T07:19:52.264747156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:19:53.997465 containerd[1594]: time="2024-10-09T07:19:53.997345815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:54.000256 containerd[1594]: time="2024-10-09T07:19:53.999483424Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 07:19:54.002579 containerd[1594]: time="2024-10-09T07:19:54.001730925Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:54.006715 containerd[1594]: time="2024-10-09T07:19:54.006639388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:54.008407 containerd[1594]: time="2024-10-09T07:19:54.008333166Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.743531498s" Oct 9 07:19:54.008733 containerd[1594]: time="2024-10-09T07:19:54.008693029Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:19:54.045399 containerd[1594]: time="2024-10-09T07:19:54.045336129Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:19:55.316570 containerd[1594]: time="2024-10-09T07:19:55.315115045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:55.318009 containerd[1594]: time="2024-10-09T07:19:55.316820414Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 07:19:55.318009 containerd[1594]: time="2024-10-09T07:19:55.317440827Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:55.321054 containerd[1594]: time="2024-10-09T07:19:55.320971585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:55.322590 containerd[1594]: time="2024-10-09T07:19:55.322506018Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.277105493s" Oct 9 07:19:55.322872 containerd[1594]: time="2024-10-09T07:19:55.322747193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:19:55.355795 containerd[1594]: time="2024-10-09T07:19:55.355757305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:19:56.416064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:19:56.423511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:19:56.510135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387727989.mount: Deactivated successfully. Oct 9 07:19:56.592783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:19:56.595613 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 07:19:56.691017 kubelet[2050]: E1009 07:19:56.689355 2050 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 07:19:56.697743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 07:19:56.697941 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 07:19:57.104222 containerd[1594]: time="2024-10-09T07:19:57.103750075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:57.105341 containerd[1594]: time="2024-10-09T07:19:57.105059948Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750"
Oct 9 07:19:57.106681 containerd[1594]: time="2024-10-09T07:19:57.106275811Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:57.108925 containerd[1594]: time="2024-10-09T07:19:57.108884074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:57.110041 containerd[1594]: time="2024-10-09T07:19:57.109994020Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.754015395s"
Oct 9 07:19:57.110212 containerd[1594]: time="2024-10-09T07:19:57.110182382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 9 07:19:57.147086 containerd[1594]: time="2024-10-09T07:19:57.147048230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 07:19:57.744917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322539144.mount: Deactivated successfully.
Oct 9 07:19:58.715561 containerd[1594]: time="2024-10-09T07:19:58.715450565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:58.717002 containerd[1594]: time="2024-10-09T07:19:58.716929863Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 9 07:19:58.718180 containerd[1594]: time="2024-10-09T07:19:58.718045954Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:58.722081 containerd[1594]: time="2024-10-09T07:19:58.722003285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:58.723459 containerd[1594]: time="2024-10-09T07:19:58.723270196Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.575988468s"
Oct 9 07:19:58.723459 containerd[1594]: time="2024-10-09T07:19:58.723314357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 07:19:58.749778 containerd[1594]: time="2024-10-09T07:19:58.749727371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 07:19:59.260200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901240012.mount: Deactivated successfully.
Oct 9 07:19:59.271278 containerd[1594]: time="2024-10-09T07:19:59.271063938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:59.272100 containerd[1594]: time="2024-10-09T07:19:59.272053197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 9 07:19:59.272822 containerd[1594]: time="2024-10-09T07:19:59.272766205Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:59.277720 containerd[1594]: time="2024-10-09T07:19:59.277602485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:59.279400 containerd[1594]: time="2024-10-09T07:19:59.278746874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 528.970189ms"
Oct 9 07:19:59.279400 containerd[1594]: time="2024-10-09T07:19:59.278808847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 07:19:59.310301 containerd[1594]: time="2024-10-09T07:19:59.310252635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 07:19:59.935246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028747479.mount: Deactivated successfully.
Oct 9 07:20:02.304019 containerd[1594]: time="2024-10-09T07:20:02.303901539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:02.312804 containerd[1594]: time="2024-10-09T07:20:02.312680948Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Oct 9 07:20:02.314988 containerd[1594]: time="2024-10-09T07:20:02.314820180Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:02.324022 containerd[1594]: time="2024-10-09T07:20:02.323896520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:02.326620 containerd[1594]: time="2024-10-09T07:20:02.326183490Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.015591727s"
Oct 9 07:20:02.326620 containerd[1594]: time="2024-10-09T07:20:02.326257867Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 9 07:20:05.788503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:20:05.796987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:20:05.839341 systemd[1]: Reloading requested from client PID 2229 ('systemctl') (unit session-7.scope)...
Oct 9 07:20:05.839367 systemd[1]: Reloading...
Oct 9 07:20:06.015567 zram_generator::config[2269]: No configuration found.
Oct 9 07:20:06.200209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:20:06.289086 systemd[1]: Reloading finished in 448 ms.
Oct 9 07:20:06.337366 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 07:20:06.337492 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 07:20:06.337813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:20:06.341901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:20:06.519836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:20:06.539208 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 07:20:06.601772 kubelet[2328]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:20:06.601772 kubelet[2328]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 07:20:06.601772 kubelet[2328]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:20:06.602378 kubelet[2328]: I1009 07:20:06.602186 2328 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 07:20:06.824728 kubelet[2328]: I1009 07:20:06.824595 2328 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 07:20:06.824728 kubelet[2328]: I1009 07:20:06.824629 2328 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 07:20:06.824980 kubelet[2328]: I1009 07:20:06.824947 2328 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 07:20:06.848667 kubelet[2328]: E1009 07:20:06.848597 2328 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://161.35.237.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.848846 kubelet[2328]: I1009 07:20:06.848816 2328 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:20:06.867917 kubelet[2328]: I1009 07:20:06.867834 2328 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 07:20:06.870082 kubelet[2328]: I1009 07:20:06.870002 2328 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 07:20:06.871491 kubelet[2328]: I1009 07:20:06.871431 2328 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 07:20:06.871491 kubelet[2328]: I1009 07:20:06.871487 2328 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 07:20:06.871491 kubelet[2328]: I1009 07:20:06.871498 2328 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 07:20:06.871734 kubelet[2328]: I1009 07:20:06.871689 2328 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:20:06.872572 kubelet[2328]: I1009 07:20:06.871861 2328 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 07:20:06.872572 kubelet[2328]: I1009 07:20:06.871886 2328 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 07:20:06.872572 kubelet[2328]: I1009 07:20:06.871919 2328 kubelet.go:312] "Adding apiserver pod source"
Oct 9 07:20:06.872572 kubelet[2328]: I1009 07:20:06.871939 2328 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 07:20:06.872572 kubelet[2328]: W1009 07:20:06.872465 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://161.35.237.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-f-f6e42a54cc&limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.872833 kubelet[2328]: E1009 07:20:06.872817 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://161.35.237.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-f-f6e42a54cc&limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.873812 kubelet[2328]: W1009 07:20:06.873767 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://161.35.237.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.873906 kubelet[2328]: E1009 07:20:06.873821 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://161.35.237.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.874201 kubelet[2328]: I1009 07:20:06.874182 2328 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 9 07:20:06.880376 kubelet[2328]: I1009 07:20:06.880299 2328 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 07:20:06.882417 kubelet[2328]: W1009 07:20:06.882350 2328 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 07:20:06.884395 kubelet[2328]: I1009 07:20:06.884346 2328 server.go:1256] "Started kubelet"
Oct 9 07:20:06.889168 kubelet[2328]: I1009 07:20:06.888497 2328 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 07:20:06.896948 kubelet[2328]: E1009 07:20:06.896774 2328 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://161.35.237.80:6443/api/v1/namespaces/default/events\": dial tcp 161.35.237.80:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.2.2-f-f6e42a54cc.17fcb7c367597c41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.2-f-f6e42a54cc,UID:ci-3975.2.2-f-f6e42a54cc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.2-f-f6e42a54cc,},FirstTimestamp:2024-10-09 07:20:06.883851329 +0000 UTC m=+0.339229321,LastTimestamp:2024-10-09 07:20:06.883851329 +0000 UTC m=+0.339229321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.2-f-f6e42a54cc,}"
Oct 9 07:20:06.897182 kubelet[2328]: I1009 07:20:06.896985 2328 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 07:20:06.898682 kubelet[2328]: I1009 07:20:06.897923 2328 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 07:20:06.899777 kubelet[2328]: I1009 07:20:06.899172 2328 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 07:20:06.900029 kubelet[2328]: I1009 07:20:06.900010 2328 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 07:20:06.900105 kubelet[2328]: I1009 07:20:06.900069 2328 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 07:20:06.902104 kubelet[2328]: I1009 07:20:06.902076 2328 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 07:20:06.902190 kubelet[2328]: I1009 07:20:06.902160 2328 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 07:20:06.902322 kubelet[2328]: E1009 07:20:06.902305 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.237.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-f-f6e42a54cc?timeout=10s\": dial tcp 161.35.237.80:6443: connect: connection refused" interval="200ms"
Oct 9 07:20:06.903193 kubelet[2328]: W1009 07:20:06.903146 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://161.35.237.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.903193 kubelet[2328]: E1009 07:20:06.903195 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://161.35.237.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.903948 kubelet[2328]: I1009 07:20:06.903931 2328 factory.go:221] Registration of the systemd container factory successfully
Oct 9 07:20:06.904382 kubelet[2328]: I1009 07:20:06.904322 2328 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 07:20:06.905082 kubelet[2328]: E1009 07:20:06.905037 2328 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 07:20:06.906531 kubelet[2328]: I1009 07:20:06.906508 2328 factory.go:221] Registration of the containerd container factory successfully
Oct 9 07:20:06.928293 kubelet[2328]: I1009 07:20:06.928099 2328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 07:20:06.930573 kubelet[2328]: I1009 07:20:06.929841 2328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 07:20:06.930573 kubelet[2328]: I1009 07:20:06.929890 2328 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 07:20:06.930573 kubelet[2328]: I1009 07:20:06.929922 2328 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 07:20:06.930573 kubelet[2328]: E1009 07:20:06.930000 2328 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 07:20:06.944500 kubelet[2328]: W1009 07:20:06.944414 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://161.35.237.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.944732 kubelet[2328]: E1009 07:20:06.944715 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://161.35.237.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:06.949172 kubelet[2328]: I1009 07:20:06.949135 2328 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 07:20:06.949172 kubelet[2328]: I1009 07:20:06.949164 2328 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 07:20:06.949172 kubelet[2328]: I1009 07:20:06.949185 2328 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:20:06.954233 kubelet[2328]: I1009 07:20:06.954157 2328 policy_none.go:49] "None policy: Start"
Oct 9 07:20:06.955679 kubelet[2328]: I1009 07:20:06.955650 2328 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 07:20:06.955819 kubelet[2328]: I1009 07:20:06.955697 2328 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 07:20:06.965091 kubelet[2328]: I1009 07:20:06.964123 2328 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 07:20:06.965091 kubelet[2328]: I1009 07:20:06.964529 2328 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 07:20:06.969517 kubelet[2328]: E1009 07:20:06.968869 2328 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.2-f-f6e42a54cc\" not found"
Oct 9 07:20:07.002974 kubelet[2328]: I1009 07:20:07.002902 2328 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.003631 kubelet[2328]: E1009 07:20:07.003595 2328 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.237.80:6443/api/v1/nodes\": dial tcp 161.35.237.80:6443: connect: connection refused" node="ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.032777 kubelet[2328]: I1009 07:20:07.032135 2328 topology_manager.go:215] "Topology Admit Handler" podUID="33925205588d5dc4d39490bbce992fdd" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.033988 kubelet[2328]: I1009 07:20:07.033807 2328 topology_manager.go:215] "Topology Admit Handler" podUID="bf33477c8052ff6d0ce927c9d8aec6ad" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.035374 kubelet[2328]: I1009 07:20:07.035021 2328 topology_manager.go:215] "Topology Admit Handler" podUID="40cbb93dda30f1a4eda7c3aea1fb8928" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104072 kubelet[2328]: I1009 07:20:07.103721 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33925205588d5dc4d39490bbce992fdd-k8s-certs\") pod \"kube-apiserver-ci-3975.2.2-f-f6e42a54cc\" (UID: \"33925205588d5dc4d39490bbce992fdd\") " pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104072 kubelet[2328]: I1009 07:20:07.103786 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104072 kubelet[2328]: E1009 07:20:07.103813 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.237.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-f-f6e42a54cc?timeout=10s\": dial tcp 161.35.237.80:6443: connect: connection refused" interval="400ms"
Oct 9 07:20:07.104072 kubelet[2328]: I1009 07:20:07.103838 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104072 kubelet[2328]: I1009 07:20:07.103899 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104384 kubelet[2328]: I1009 07:20:07.103934 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104869 kubelet[2328]: I1009 07:20:07.104658 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40cbb93dda30f1a4eda7c3aea1fb8928-kubeconfig\") pod \"kube-scheduler-ci-3975.2.2-f-f6e42a54cc\" (UID: \"40cbb93dda30f1a4eda7c3aea1fb8928\") " pod="kube-system/kube-scheduler-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104869 kubelet[2328]: I1009 07:20:07.104720 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33925205588d5dc4d39490bbce992fdd-ca-certs\") pod \"kube-apiserver-ci-3975.2.2-f-f6e42a54cc\" (UID: \"33925205588d5dc4d39490bbce992fdd\") " pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104869 kubelet[2328]: I1009 07:20:07.104758 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33925205588d5dc4d39490bbce992fdd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.2-f-f6e42a54cc\" (UID: \"33925205588d5dc4d39490bbce992fdd\") " pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.104869 kubelet[2328]: I1009 07:20:07.104789 2328 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-ca-certs\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.205872 kubelet[2328]: I1009 07:20:07.205811 2328 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.206389 kubelet[2328]: E1009 07:20:07.206358 2328 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.237.80:6443/api/v1/nodes\": dial tcp 161.35.237.80:6443: connect: connection refused" node="ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.340818 kubelet[2328]: E1009 07:20:07.340710 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:07.342601 containerd[1594]: time="2024-10-09T07:20:07.341931289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.2-f-f6e42a54cc,Uid:33925205588d5dc4d39490bbce992fdd,Namespace:kube-system,Attempt:0,}"
Oct 9 07:20:07.344307 kubelet[2328]: E1009 07:20:07.342725 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:07.346097 kubelet[2328]: E1009 07:20:07.345632 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:07.347369 containerd[1594]: time="2024-10-09T07:20:07.346885165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.2-f-f6e42a54cc,Uid:40cbb93dda30f1a4eda7c3aea1fb8928,Namespace:kube-system,Attempt:0,}"
Oct 9 07:20:07.347369 containerd[1594]: time="2024-10-09T07:20:07.346885670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.2-f-f6e42a54cc,Uid:bf33477c8052ff6d0ce927c9d8aec6ad,Namespace:kube-system,Attempt:0,}"
Oct 9 07:20:07.504475 kubelet[2328]: E1009 07:20:07.504352 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.237.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-f-f6e42a54cc?timeout=10s\": dial tcp 161.35.237.80:6443: connect: connection refused" interval="800ms"
Oct 9 07:20:07.608477 kubelet[2328]: I1009 07:20:07.608245 2328 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.609109 kubelet[2328]: E1009 07:20:07.608744 2328 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.237.80:6443/api/v1/nodes\": dial tcp 161.35.237.80:6443: connect: connection refused" node="ci-3975.2.2-f-f6e42a54cc"
Oct 9 07:20:07.849508 kubelet[2328]: W1009 07:20:07.849312 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://161.35.237.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:07.849508 kubelet[2328]: E1009 07:20:07.849413 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://161.35.237.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:07.939772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159900866.mount: Deactivated successfully.
Oct 9 07:20:07.948605 containerd[1594]: time="2024-10-09T07:20:07.948178971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:20:07.952344 containerd[1594]: time="2024-10-09T07:20:07.952249008Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 9 07:20:07.953564 containerd[1594]: time="2024-10-09T07:20:07.953450312Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:20:07.955569 containerd[1594]: time="2024-10-09T07:20:07.955486124Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:20:07.957184 containerd[1594]: time="2024-10-09T07:20:07.957100200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:20:07.960098 containerd[1594]: time="2024-10-09T07:20:07.958288845Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:20:07.963939 containerd[1594]: time="2024-10-09T07:20:07.963871319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 07:20:07.967926 containerd[1594]: time="2024-10-09T07:20:07.967825980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:20:07.968732 containerd[1594]: time="2024-10-09T07:20:07.968621871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.583998ms"
Oct 9 07:20:07.973639 containerd[1594]: time="2024-10-09T07:20:07.973470845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.31393ms"
Oct 9 07:20:07.975729 containerd[1594]: time="2024-10-09T07:20:07.975651694Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.568168ms"
Oct 9 07:20:08.030844 kubelet[2328]: W1009 07:20:08.030691 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://161.35.237.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:08.030844 kubelet[2328]: E1009 07:20:08.030803 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://161.35.237.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:08.187371 containerd[1594]: time="2024-10-09T07:20:08.186574916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:20:08.187371 containerd[1594]: time="2024-10-09T07:20:08.186674749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:08.187371 containerd[1594]: time="2024-10-09T07:20:08.186701919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:20:08.187371 containerd[1594]: time="2024-10-09T07:20:08.186720395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:08.190072 containerd[1594]: time="2024-10-09T07:20:08.189610925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:20:08.190072 containerd[1594]: time="2024-10-09T07:20:08.189691111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:08.190072 containerd[1594]: time="2024-10-09T07:20:08.189706797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:20:08.190072 containerd[1594]: time="2024-10-09T07:20:08.189825676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:08.195683 containerd[1594]: time="2024-10-09T07:20:08.192398423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:20:08.195683 containerd[1594]: time="2024-10-09T07:20:08.195418996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:08.195683 containerd[1594]: time="2024-10-09T07:20:08.195460478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:20:08.195683 containerd[1594]: time="2024-10-09T07:20:08.195482006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:08.299614 kubelet[2328]: W1009 07:20:08.297716 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://161.35.237.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-f-f6e42a54cc&limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:08.299614 kubelet[2328]: E1009 07:20:08.299049 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://161.35.237.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-f-f6e42a54cc&limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused
Oct 9 07:20:08.306624 kubelet[2328]: E1009 07:20:08.305824 2328 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.237.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-f-f6e42a54cc?timeout=10s\": dial tcp 161.35.237.80:6443: connect: connection refused" interval="1.6s"
Oct 9 07:20:08.341382 containerd[1594]: time="2024-10-09T07:20:08.341245187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.2-f-f6e42a54cc,Uid:33925205588d5dc4d39490bbce992fdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c5b7b295bf827fbd69c8709653f93a3c52c187b9150c5a366dbbf240bc6f86\""
Oct 9 07:20:08.343700 kubelet[2328]: E1009 07:20:08.343672 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:08.353880 containerd[1594]: time="2024-10-09T07:20:08.353828743Z" level=info msg="CreateContainer within sandbox \"78c5b7b295bf827fbd69c8709653f93a3c52c187b9150c5a366dbbf240bc6f86\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 9 07:20:08.355324 containerd[1594]: time="2024-10-09T07:20:08.355284096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.2-f-f6e42a54cc,Uid:bf33477c8052ff6d0ce927c9d8aec6ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"79c92b70d8037dac6b129523fe4fffa96369923856113e1f40b7aa74f583632e\""
Oct 9 07:20:08.357822 kubelet[2328]: E1009 07:20:08.357784 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:08.364303 containerd[1594]: time="2024-10-09T07:20:08.363990490Z" level=info msg="CreateContainer within sandbox \"79c92b70d8037dac6b129523fe4fffa96369923856113e1f40b7aa74f583632e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 9 07:20:08.367748 containerd[1594]: time="2024-10-09T07:20:08.367415072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.2-f-f6e42a54cc,Uid:40cbb93dda30f1a4eda7c3aea1fb8928,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a71a77cb8a128cfecdb58e8d17821598f91136e698c4db821eda23470696341\""
Oct 9 07:20:08.369118 kubelet[2328]: E1009 07:20:08.368881 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:08.374353 containerd[1594]: time="2024-10-09T07:20:08.374139092Z" level=info msg="CreateContainer within sandbox
\"4a71a77cb8a128cfecdb58e8d17821598f91136e698c4db821eda23470696341\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:20:08.392451 containerd[1594]: time="2024-10-09T07:20:08.391970309Z" level=info msg="CreateContainer within sandbox \"79c92b70d8037dac6b129523fe4fffa96369923856113e1f40b7aa74f583632e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eab33d3ecc6d497558adb774bf68d415c6fc288b3f2ea5f5bf3ff322e0d868c2\"" Oct 9 07:20:08.393863 containerd[1594]: time="2024-10-09T07:20:08.393291153Z" level=info msg="StartContainer for \"eab33d3ecc6d497558adb774bf68d415c6fc288b3f2ea5f5bf3ff322e0d868c2\"" Oct 9 07:20:08.411673 kubelet[2328]: I1009 07:20:08.411617 2328 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:08.412418 kubelet[2328]: E1009 07:20:08.412381 2328 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://161.35.237.80:6443/api/v1/nodes\": dial tcp 161.35.237.80:6443: connect: connection refused" node="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:08.416365 containerd[1594]: time="2024-10-09T07:20:08.416185465Z" level=info msg="CreateContainer within sandbox \"78c5b7b295bf827fbd69c8709653f93a3c52c187b9150c5a366dbbf240bc6f86\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"520325e1cc7bc6052b78858804df150d5d46b0588b7111c309d081bf25d3f312\"" Oct 9 07:20:08.417576 containerd[1594]: time="2024-10-09T07:20:08.416946696Z" level=info msg="StartContainer for \"520325e1cc7bc6052b78858804df150d5d46b0588b7111c309d081bf25d3f312\"" Oct 9 07:20:08.424152 containerd[1594]: time="2024-10-09T07:20:08.424086640Z" level=info msg="CreateContainer within sandbox \"4a71a77cb8a128cfecdb58e8d17821598f91136e698c4db821eda23470696341\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1729e978f544362c3bed0c38bac403e65cc59d9304683eb1cf33a53926ec445c\"" Oct 9 07:20:08.425107 
containerd[1594]: time="2024-10-09T07:20:08.425068558Z" level=info msg="StartContainer for \"1729e978f544362c3bed0c38bac403e65cc59d9304683eb1cf33a53926ec445c\"" Oct 9 07:20:08.453446 kubelet[2328]: W1009 07:20:08.453110 2328 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://161.35.237.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused Oct 9 07:20:08.453446 kubelet[2328]: E1009 07:20:08.453173 2328 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://161.35.237.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.237.80:6443: connect: connection refused Oct 9 07:20:08.592199 containerd[1594]: time="2024-10-09T07:20:08.592143932Z" level=info msg="StartContainer for \"eab33d3ecc6d497558adb774bf68d415c6fc288b3f2ea5f5bf3ff322e0d868c2\" returns successfully" Oct 9 07:20:08.636025 containerd[1594]: time="2024-10-09T07:20:08.635306109Z" level=info msg="StartContainer for \"520325e1cc7bc6052b78858804df150d5d46b0588b7111c309d081bf25d3f312\" returns successfully" Oct 9 07:20:08.636424 containerd[1594]: time="2024-10-09T07:20:08.636259891Z" level=info msg="StartContainer for \"1729e978f544362c3bed0c38bac403e65cc59d9304683eb1cf33a53926ec445c\" returns successfully" Oct 9 07:20:08.876091 kubelet[2328]: E1009 07:20:08.875965 2328 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://161.35.237.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 161.35.237.80:6443: connect: connection refused Oct 9 07:20:08.959105 kubelet[2328]: E1009 07:20:08.959037 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:08.976012 kubelet[2328]: E1009 07:20:08.975637 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:08.985042 kubelet[2328]: E1009 07:20:08.984988 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:09.985483 kubelet[2328]: E1009 07:20:09.985455 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:10.016885 kubelet[2328]: I1009 07:20:10.013941 2328 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:10.856249 kubelet[2328]: I1009 07:20:10.855964 2328 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:10.875292 kubelet[2328]: E1009 07:20:10.875246 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.2-f-f6e42a54cc\" not found" Oct 9 07:20:10.976532 kubelet[2328]: E1009 07:20:10.976488 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.2-f-f6e42a54cc\" not found" Oct 9 07:20:11.077557 kubelet[2328]: E1009 07:20:11.077294 2328 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.2-f-f6e42a54cc\" not found" Oct 9 07:20:11.875791 kubelet[2328]: I1009 07:20:11.875652 2328 apiserver.go:52] "Watching apiserver" Oct 9 07:20:11.902688 kubelet[2328]: I1009 07:20:11.902586 2328 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:20:13.132882 
kubelet[2328]: W1009 07:20:13.132503 2328 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:20:13.133790 kubelet[2328]: E1009 07:20:13.133762 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:13.991493 kubelet[2328]: E1009 07:20:13.991422 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:14.220019 systemd[1]: Reloading requested from client PID 2602 ('systemctl') (unit session-7.scope)... Oct 9 07:20:14.220046 systemd[1]: Reloading... Oct 9 07:20:14.332580 zram_generator::config[2642]: No configuration found. Oct 9 07:20:14.485388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:20:14.560800 kubelet[2328]: W1009 07:20:14.560105 2328 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:20:14.560800 kubelet[2328]: E1009 07:20:14.560624 2328 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:14.576460 systemd[1]: Reloading finished in 355 ms. Oct 9 07:20:14.611127 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:20:14.623329 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:20:14.623715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:20:14.633109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:20:14.783856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:20:14.796389 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:20:14.906568 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:20:14.906568 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:20:14.906568 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:20:14.906568 kubelet[2700]: I1009 07:20:14.904831 2700 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:20:14.913307 kubelet[2700]: I1009 07:20:14.913271 2700 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:20:14.913483 kubelet[2700]: I1009 07:20:14.913474 2700 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:20:14.914173 kubelet[2700]: I1009 07:20:14.914115 2700 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:20:14.917762 kubelet[2700]: I1009 07:20:14.917731 2700 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 9 07:20:14.925747 kubelet[2700]: I1009 07:20:14.925106 2700 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:20:14.939853 kubelet[2700]: I1009 07:20:14.939823 2700 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:20:14.941565 kubelet[2700]: I1009 07:20:14.940948 2700 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:20:14.941565 kubelet[2700]: I1009 07:20:14.941142 2700 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l} Oct 9 07:20:14.941565 kubelet[2700]: I1009 07:20:14.941177 2700 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:20:14.941565 kubelet[2700]: I1009 07:20:14.941188 2700 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:20:14.941565 kubelet[2700]: I1009 07:20:14.941227 2700 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:20:14.941565 kubelet[2700]: I1009 07:20:14.941336 2700 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:20:14.941865 kubelet[2700]: I1009 07:20:14.941354 2700 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:20:14.941865 kubelet[2700]: I1009 07:20:14.941390 2700 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:20:14.941865 kubelet[2700]: I1009 07:20:14.941412 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:20:14.948735 kubelet[2700]: I1009 07:20:14.948687 2700 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:20:14.949067 kubelet[2700]: I1009 07:20:14.949044 2700 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:20:14.954857 kubelet[2700]: I1009 07:20:14.954818 2700 server.go:1256] "Started kubelet" Oct 9 07:20:14.962655 kubelet[2700]: I1009 07:20:14.961986 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:20:14.965197 kubelet[2700]: I1009 07:20:14.965164 2700 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:20:14.965741 kubelet[2700]: I1009 07:20:14.964238 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:20:14.971940 kubelet[2700]: I1009 07:20:14.971709 2700 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:20:14.975572 kubelet[2700]: I1009 07:20:14.975273 2700 
server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:20:14.978677 kubelet[2700]: I1009 07:20:14.978627 2700 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:20:14.978841 kubelet[2700]: I1009 07:20:14.978750 2700 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:20:14.979052 kubelet[2700]: I1009 07:20:14.978887 2700 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:20:14.983219 kubelet[2700]: I1009 07:20:14.983183 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:20:14.985202 kubelet[2700]: I1009 07:20:14.984858 2700 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:20:14.986634 kubelet[2700]: I1009 07:20:14.986103 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:20:14.988685 kubelet[2700]: I1009 07:20:14.987291 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:20:14.988685 kubelet[2700]: I1009 07:20:14.987337 2700 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:20:14.988685 kubelet[2700]: I1009 07:20:14.987362 2700 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:20:14.988685 kubelet[2700]: E1009 07:20:14.987441 2700 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:20:14.994019 kubelet[2700]: E1009 07:20:14.993984 2700 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:20:14.998389 kubelet[2700]: I1009 07:20:14.996619 2700 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:20:15.076464 kubelet[2700]: I1009 07:20:15.075907 2700 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.088421 kubelet[2700]: E1009 07:20:15.088306 2700 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:20:15.090712 kubelet[2700]: I1009 07:20:15.090323 2700 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.090712 kubelet[2700]: I1009 07:20:15.090410 2700 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.112326 kubelet[2700]: I1009 07:20:15.112291 2700 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:20:15.112823 kubelet[2700]: I1009 07:20:15.112522 2700 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:20:15.112823 kubelet[2700]: I1009 07:20:15.112577 2700 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:20:15.112823 kubelet[2700]: I1009 07:20:15.112737 2700 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:20:15.112823 kubelet[2700]: I1009 07:20:15.112759 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:20:15.112823 kubelet[2700]: I1009 07:20:15.112766 2700 policy_none.go:49] "None policy: Start" Oct 9 07:20:15.113908 kubelet[2700]: I1009 07:20:15.113889 2700 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:20:15.114675 kubelet[2700]: I1009 07:20:15.114240 2700 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:20:15.114675 kubelet[2700]: I1009 07:20:15.114420 2700 state_mem.go:75] "Updated machine memory state" Oct 9 07:20:15.116412 kubelet[2700]: I1009 
07:20:15.116382 2700 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:20:15.119701 kubelet[2700]: I1009 07:20:15.119672 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:20:15.291408 kubelet[2700]: I1009 07:20:15.289191 2700 topology_manager.go:215] "Topology Admit Handler" podUID="33925205588d5dc4d39490bbce992fdd" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.291408 kubelet[2700]: I1009 07:20:15.289361 2700 topology_manager.go:215] "Topology Admit Handler" podUID="bf33477c8052ff6d0ce927c9d8aec6ad" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.291408 kubelet[2700]: I1009 07:20:15.289401 2700 topology_manager.go:215] "Topology Admit Handler" podUID="40cbb93dda30f1a4eda7c3aea1fb8928" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.303941 kubelet[2700]: W1009 07:20:15.302788 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:20:15.303941 kubelet[2700]: W1009 07:20:15.302858 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:20:15.303941 kubelet[2700]: E1009 07:20:15.303087 2700 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.2.2-f-f6e42a54cc\" already exists" pod="kube-system/kube-scheduler-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.304227 kubelet[2700]: W1009 07:20:15.303982 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:20:15.304227 kubelet[2700]: E1009 07:20:15.304067 2700 kubelet.go:1921] "Failed creating a mirror 
pod for" err="pods \"kube-apiserver-ci-3975.2.2-f-f6e42a54cc\" already exists" pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.381715 kubelet[2700]: I1009 07:20:15.381657 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33925205588d5dc4d39490bbce992fdd-k8s-certs\") pod \"kube-apiserver-ci-3975.2.2-f-f6e42a54cc\" (UID: \"33925205588d5dc4d39490bbce992fdd\") " pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.381715 kubelet[2700]: I1009 07:20:15.381721 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.382058 kubelet[2700]: I1009 07:20:15.381749 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.382058 kubelet[2700]: I1009 07:20:15.381774 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.382058 kubelet[2700]: I1009 07:20:15.381796 2700 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33925205588d5dc4d39490bbce992fdd-ca-certs\") pod \"kube-apiserver-ci-3975.2.2-f-f6e42a54cc\" (UID: \"33925205588d5dc4d39490bbce992fdd\") " pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.382058 kubelet[2700]: I1009 07:20:15.381815 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33925205588d5dc4d39490bbce992fdd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.2-f-f6e42a54cc\" (UID: \"33925205588d5dc4d39490bbce992fdd\") " pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.382058 kubelet[2700]: I1009 07:20:15.381836 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-ca-certs\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.382348 kubelet[2700]: I1009 07:20:15.381874 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf33477c8052ff6d0ce927c9d8aec6ad-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.2-f-f6e42a54cc\" (UID: \"bf33477c8052ff6d0ce927c9d8aec6ad\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.382348 kubelet[2700]: I1009 07:20:15.381905 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40cbb93dda30f1a4eda7c3aea1fb8928-kubeconfig\") pod \"kube-scheduler-ci-3975.2.2-f-f6e42a54cc\" (UID: \"40cbb93dda30f1a4eda7c3aea1fb8928\") " 
pod="kube-system/kube-scheduler-ci-3975.2.2-f-f6e42a54cc" Oct 9 07:20:15.604954 kubelet[2700]: E1009 07:20:15.604176 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:15.604954 kubelet[2700]: E1009 07:20:15.604215 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:15.604954 kubelet[2700]: E1009 07:20:15.604858 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:15.944693 kubelet[2700]: I1009 07:20:15.942674 2700 apiserver.go:52] "Watching apiserver" Oct 9 07:20:15.979047 kubelet[2700]: I1009 07:20:15.978965 2700 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:20:16.049207 kubelet[2700]: E1009 07:20:16.049168 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:16.050035 kubelet[2700]: E1009 07:20:16.050000 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:16.054796 kubelet[2700]: E1009 07:20:16.054747 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:16.135706 kubelet[2700]: I1009 07:20:16.135658 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.2-f-f6e42a54cc" 
podStartSLOduration=3.135585884 podStartE2EDuration="3.135585884s" podCreationTimestamp="2024-10-09 07:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:20:16.135076573 +0000 UTC m=+1.326782528" watchObservedRunningTime="2024-10-09 07:20:16.135585884 +0000 UTC m=+1.327291835" Oct 9 07:20:16.190934 kubelet[2700]: I1009 07:20:16.190886 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.2-f-f6e42a54cc" podStartSLOduration=2.190844777 podStartE2EDuration="2.190844777s" podCreationTimestamp="2024-10-09 07:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:20:16.173465723 +0000 UTC m=+1.365171678" watchObservedRunningTime="2024-10-09 07:20:16.190844777 +0000 UTC m=+1.382550733" Oct 9 07:20:17.056287 kubelet[2700]: E1009 07:20:17.056240 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:17.877478 kubelet[2700]: E1009 07:20:17.877023 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:18.058130 kubelet[2700]: E1009 07:20:18.058099 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:20.236683 sudo[1804]: pam_unix(sudo:session): session closed for user root Oct 9 07:20:20.241912 sshd[1798]: pam_unix(sshd:session): session closed for user core Oct 9 07:20:20.248850 systemd[1]: sshd@6-161.35.237.80:22-147.75.109.163:38178.service: Deactivated successfully. 
Oct 9 07:20:20.252169 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit.
Oct 9 07:20:20.252803 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 07:20:20.254937 systemd-logind[1573]: Removed session 7.
Oct 9 07:20:25.129045 kubelet[2700]: E1009 07:20:25.127371 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:25.152478 kubelet[2700]: I1009 07:20:25.152193 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.2-f-f6e42a54cc" podStartSLOduration=10.152147811 podStartE2EDuration="10.152147811s" podCreationTimestamp="2024-10-09 07:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:20:16.192313647 +0000 UTC m=+1.384019606" watchObservedRunningTime="2024-10-09 07:20:25.152147811 +0000 UTC m=+10.343853770"
Oct 9 07:20:26.061981 kubelet[2700]: E1009 07:20:26.061942 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:26.077231 kubelet[2700]: E1009 07:20:26.076859 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:26.077231 kubelet[2700]: E1009 07:20:26.076987 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:27.289941 kubelet[2700]: I1009 07:20:27.289896 2700 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 07:20:27.291313 containerd[1594]: time="2024-10-09T07:20:27.291179592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 07:20:27.291833 kubelet[2700]: I1009 07:20:27.291731 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 07:20:27.887121 kubelet[2700]: E1009 07:20:27.886837 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:27.895289 kubelet[2700]: I1009 07:20:27.895239 2700 topology_manager.go:215] "Topology Admit Handler" podUID="b5a88a0d-1080-44a2-a9b9-6e194ef59558" podNamespace="kube-system" podName="kube-proxy-6bjcv"
Oct 9 07:20:27.968433 kubelet[2700]: I1009 07:20:27.968315 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5a88a0d-1080-44a2-a9b9-6e194ef59558-lib-modules\") pod \"kube-proxy-6bjcv\" (UID: \"b5a88a0d-1080-44a2-a9b9-6e194ef59558\") " pod="kube-system/kube-proxy-6bjcv"
Oct 9 07:20:27.968704 kubelet[2700]: I1009 07:20:27.968493 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5a88a0d-1080-44a2-a9b9-6e194ef59558-xtables-lock\") pod \"kube-proxy-6bjcv\" (UID: \"b5a88a0d-1080-44a2-a9b9-6e194ef59558\") " pod="kube-system/kube-proxy-6bjcv"
Oct 9 07:20:27.968704 kubelet[2700]: I1009 07:20:27.968560 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4lr9\" (UniqueName: \"kubernetes.io/projected/b5a88a0d-1080-44a2-a9b9-6e194ef59558-kube-api-access-g4lr9\") pod \"kube-proxy-6bjcv\" (UID: \"b5a88a0d-1080-44a2-a9b9-6e194ef59558\") " pod="kube-system/kube-proxy-6bjcv"
Oct 9 07:20:27.968704 kubelet[2700]: I1009 07:20:27.968583 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5a88a0d-1080-44a2-a9b9-6e194ef59558-kube-proxy\") pod \"kube-proxy-6bjcv\" (UID: \"b5a88a0d-1080-44a2-a9b9-6e194ef59558\") " pod="kube-system/kube-proxy-6bjcv"
Oct 9 07:20:28.209695 kubelet[2700]: E1009 07:20:28.209119 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:28.213750 containerd[1594]: time="2024-10-09T07:20:28.210054613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bjcv,Uid:b5a88a0d-1080-44a2-a9b9-6e194ef59558,Namespace:kube-system,Attempt:0,}"
Oct 9 07:20:28.298756 containerd[1594]: time="2024-10-09T07:20:28.298601723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:20:28.300528 containerd[1594]: time="2024-10-09T07:20:28.298699978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:28.300528 containerd[1594]: time="2024-10-09T07:20:28.300399837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:20:28.300528 containerd[1594]: time="2024-10-09T07:20:28.300427031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:28.367433 systemd[1]: run-containerd-runc-k8s.io-6be68369197099d37bd2e9de9ab67cc8758ddc83be34325dfa1561efda205056-runc.9uJuSq.mount: Deactivated successfully.
Oct 9 07:20:28.475286 kubelet[2700]: I1009 07:20:28.473969 2700 topology_manager.go:215] "Topology Admit Handler" podUID="55322c37-3adc-4455-b78d-3be25b3db6bd" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-xjl6q"
Oct 9 07:20:28.490974 containerd[1594]: time="2024-10-09T07:20:28.489049059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bjcv,Uid:b5a88a0d-1080-44a2-a9b9-6e194ef59558,Namespace:kube-system,Attempt:0,} returns sandbox id \"6be68369197099d37bd2e9de9ab67cc8758ddc83be34325dfa1561efda205056\""
Oct 9 07:20:28.491623 kubelet[2700]: E1009 07:20:28.491100 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:28.497048 containerd[1594]: time="2024-10-09T07:20:28.496980574Z" level=info msg="CreateContainer within sandbox \"6be68369197099d37bd2e9de9ab67cc8758ddc83be34325dfa1561efda205056\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 07:20:28.529030 containerd[1594]: time="2024-10-09T07:20:28.528956645Z" level=info msg="CreateContainer within sandbox \"6be68369197099d37bd2e9de9ab67cc8758ddc83be34325dfa1561efda205056\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08b0ce75acf26c9ea405c8adb562515e2aeb72de5bda163f99b5f9803bab3633\""
Oct 9 07:20:28.530085 containerd[1594]: time="2024-10-09T07:20:28.530023109Z" level=info msg="StartContainer for \"08b0ce75acf26c9ea405c8adb562515e2aeb72de5bda163f99b5f9803bab3633\""
Oct 9 07:20:28.573592 kubelet[2700]: I1009 07:20:28.573111 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55322c37-3adc-4455-b78d-3be25b3db6bd-var-lib-calico\") pod \"tigera-operator-5d56685c77-xjl6q\" (UID: \"55322c37-3adc-4455-b78d-3be25b3db6bd\") " pod="tigera-operator/tigera-operator-5d56685c77-xjl6q"
Oct 9 07:20:28.573592 kubelet[2700]: I1009 07:20:28.573181 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdg6\" (UniqueName: \"kubernetes.io/projected/55322c37-3adc-4455-b78d-3be25b3db6bd-kube-api-access-8vdg6\") pod \"tigera-operator-5d56685c77-xjl6q\" (UID: \"55322c37-3adc-4455-b78d-3be25b3db6bd\") " pod="tigera-operator/tigera-operator-5d56685c77-xjl6q"
Oct 9 07:20:28.624468 containerd[1594]: time="2024-10-09T07:20:28.624020358Z" level=info msg="StartContainer for \"08b0ce75acf26c9ea405c8adb562515e2aeb72de5bda163f99b5f9803bab3633\" returns successfully"
Oct 9 07:20:28.789310 containerd[1594]: time="2024-10-09T07:20:28.786428523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-xjl6q,Uid:55322c37-3adc-4455-b78d-3be25b3db6bd,Namespace:tigera-operator,Attempt:0,}"
Oct 9 07:20:28.826798 containerd[1594]: time="2024-10-09T07:20:28.826612648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:20:28.826798 containerd[1594]: time="2024-10-09T07:20:28.826743407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:28.826798 containerd[1594]: time="2024-10-09T07:20:28.826772172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:20:28.827305 containerd[1594]: time="2024-10-09T07:20:28.827115143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:28.924360 containerd[1594]: time="2024-10-09T07:20:28.924291094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-xjl6q,Uid:55322c37-3adc-4455-b78d-3be25b3db6bd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8b61844952371d150cd3a39e39d5ed11eaa99e3fe282432bbf6db6c3cdbf360c\""
Oct 9 07:20:28.929712 containerd[1594]: time="2024-10-09T07:20:28.929240958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 07:20:29.094455 kubelet[2700]: E1009 07:20:29.093326 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:29.114582 update_engine[1577]: I1009 07:20:29.112695 1577 update_attempter.cc:509] Updating boot flags...
Oct 9 07:20:29.179178 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3022)
Oct 9 07:20:30.247932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927687555.mount: Deactivated successfully.
Oct 9 07:20:30.846287 containerd[1594]: time="2024-10-09T07:20:30.845809826Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:30.847594 containerd[1594]: time="2024-10-09T07:20:30.847327286Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136573"
Oct 9 07:20:30.848292 containerd[1594]: time="2024-10-09T07:20:30.847873235Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:30.850703 containerd[1594]: time="2024-10-09T07:20:30.850654645Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:30.851452 containerd[1594]: time="2024-10-09T07:20:30.851412123Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.922115692s"
Oct 9 07:20:30.851452 containerd[1594]: time="2024-10-09T07:20:30.851455727Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Oct 9 07:20:30.856889 containerd[1594]: time="2024-10-09T07:20:30.856835486Z" level=info msg="CreateContainer within sandbox \"8b61844952371d150cd3a39e39d5ed11eaa99e3fe282432bbf6db6c3cdbf360c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 9 07:20:30.880435 containerd[1594]: time="2024-10-09T07:20:30.880200112Z" level=info msg="CreateContainer within sandbox \"8b61844952371d150cd3a39e39d5ed11eaa99e3fe282432bbf6db6c3cdbf360c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4a8b4f6c0aad22043f4a9eeb171a663ad91700ec98960d3a86f60c2138187555\""
Oct 9 07:20:30.882195 containerd[1594]: time="2024-10-09T07:20:30.881216071Z" level=info msg="StartContainer for \"4a8b4f6c0aad22043f4a9eeb171a663ad91700ec98960d3a86f60c2138187555\""
Oct 9 07:20:30.985128 containerd[1594]: time="2024-10-09T07:20:30.985059635Z" level=info msg="StartContainer for \"4a8b4f6c0aad22043f4a9eeb171a663ad91700ec98960d3a86f60c2138187555\" returns successfully"
Oct 9 07:20:31.268844 kubelet[2700]: I1009 07:20:31.262227 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6bjcv" podStartSLOduration=4.260727416 podStartE2EDuration="4.260727416s" podCreationTimestamp="2024-10-09 07:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:20:29.135820677 +0000 UTC m=+14.327526633" watchObservedRunningTime="2024-10-09 07:20:31.260727416 +0000 UTC m=+16.452433371"
Oct 9 07:20:34.243490 kubelet[2700]: I1009 07:20:34.241416 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-xjl6q" podStartSLOduration=4.317183078 podStartE2EDuration="6.241285196s" podCreationTimestamp="2024-10-09 07:20:28 +0000 UTC" firstStartedPulling="2024-10-09 07:20:28.927771505 +0000 UTC m=+14.119477439" lastFinishedPulling="2024-10-09 07:20:30.85187361 +0000 UTC m=+16.043579557" observedRunningTime="2024-10-09 07:20:31.267233659 +0000 UTC m=+16.458939614" watchObservedRunningTime="2024-10-09 07:20:34.241285196 +0000 UTC m=+19.432991170"
Oct 9 07:20:34.246486 kubelet[2700]: I1009 07:20:34.244064 2700 topology_manager.go:215] "Topology Admit Handler" podUID="5dbfb979-9107-431c-b5a6-1fcdd29ac8ca" podNamespace="calico-system" podName="calico-typha-74f8c4dccb-dlrg6"
Oct 9 07:20:34.312611 kubelet[2700]: I1009 07:20:34.312561 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-typha-certs\") pod \"calico-typha-74f8c4dccb-dlrg6\" (UID: \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\") " pod="calico-system/calico-typha-74f8c4dccb-dlrg6"
Oct 9 07:20:34.312796 kubelet[2700]: I1009 07:20:34.312642 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbggq\" (UniqueName: \"kubernetes.io/projected/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-kube-api-access-mbggq\") pod \"calico-typha-74f8c4dccb-dlrg6\" (UID: \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\") " pod="calico-system/calico-typha-74f8c4dccb-dlrg6"
Oct 9 07:20:34.312796 kubelet[2700]: I1009 07:20:34.312676 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-tigera-ca-bundle\") pod \"calico-typha-74f8c4dccb-dlrg6\" (UID: \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\") " pod="calico-system/calico-typha-74f8c4dccb-dlrg6"
Oct 9 07:20:34.437786 kubelet[2700]: I1009 07:20:34.437743 2700 topology_manager.go:215] "Topology Admit Handler" podUID="485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" podNamespace="calico-system" podName="calico-node-lpcxj"
Oct 9 07:20:34.514488 kubelet[2700]: I1009 07:20:34.513834 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-bin-dir\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.514488 kubelet[2700]: I1009 07:20:34.514217 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-node-certs\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.514791 kubelet[2700]: I1009 07:20:34.514740 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-lib-modules\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.514899 kubelet[2700]: I1009 07:20:34.514828 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-net-dir\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.514998 kubelet[2700]: I1009 07:20:34.514907 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7q9l\" (UniqueName: \"kubernetes.io/projected/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-kube-api-access-x7q9l\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.515124 kubelet[2700]: I1009 07:20:34.515090 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-xtables-lock\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.515173 kubelet[2700]: I1009 07:20:34.515127 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-policysync\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.515821 kubelet[2700]: I1009 07:20:34.515338 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-log-dir\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.515821 kubelet[2700]: I1009 07:20:34.515413 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-flexvol-driver-host\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.515821 kubelet[2700]: I1009 07:20:34.515438 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-tigera-ca-bundle\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.515821 kubelet[2700]: I1009 07:20:34.515487 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-lib-calico\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.515821 kubelet[2700]: I1009 07:20:34.515512 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-run-calico\") pod \"calico-node-lpcxj\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") " pod="calico-system/calico-node-lpcxj"
Oct 9 07:20:34.555565 kubelet[2700]: E1009 07:20:34.553894 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:34.557634 containerd[1594]: time="2024-10-09T07:20:34.556018279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74f8c4dccb-dlrg6,Uid:5dbfb979-9107-431c-b5a6-1fcdd29ac8ca,Namespace:calico-system,Attempt:0,}"
Oct 9 07:20:34.576937 kubelet[2700]: I1009 07:20:34.575289 2700 topology_manager.go:215] "Topology Admit Handler" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" podNamespace="calico-system" podName="csi-node-driver-h7tbg"
Oct 9 07:20:34.578892 kubelet[2700]: E1009 07:20:34.578679 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e"
Oct 9 07:20:34.628608 kubelet[2700]: I1009 07:20:34.616520 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1f970c3-0e6b-4c36-a7b6-7c163a15816e-kubelet-dir\") pod \"csi-node-driver-h7tbg\" (UID: \"e1f970c3-0e6b-4c36-a7b6-7c163a15816e\") " pod="calico-system/csi-node-driver-h7tbg"
Oct 9 07:20:34.628608 kubelet[2700]: I1009 07:20:34.617644 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxpkn\" (UniqueName: \"kubernetes.io/projected/e1f970c3-0e6b-4c36-a7b6-7c163a15816e-kube-api-access-bxpkn\") pod \"csi-node-driver-h7tbg\" (UID: \"e1f970c3-0e6b-4c36-a7b6-7c163a15816e\") " pod="calico-system/csi-node-driver-h7tbg"
Oct 9 07:20:34.628608 kubelet[2700]: I1009 07:20:34.617798 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e1f970c3-0e6b-4c36-a7b6-7c163a15816e-varrun\") pod \"csi-node-driver-h7tbg\" (UID: \"e1f970c3-0e6b-4c36-a7b6-7c163a15816e\") " pod="calico-system/csi-node-driver-h7tbg"
Oct 9 07:20:34.628608 kubelet[2700]: I1009 07:20:34.617818 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e1f970c3-0e6b-4c36-a7b6-7c163a15816e-socket-dir\") pod \"csi-node-driver-h7tbg\" (UID: \"e1f970c3-0e6b-4c36-a7b6-7c163a15816e\") " pod="calico-system/csi-node-driver-h7tbg"
Oct 9 07:20:34.628608 kubelet[2700]: I1009 07:20:34.617874 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e1f970c3-0e6b-4c36-a7b6-7c163a15816e-registration-dir\") pod \"csi-node-driver-h7tbg\" (UID: \"e1f970c3-0e6b-4c36-a7b6-7c163a15816e\") " pod="calico-system/csi-node-driver-h7tbg"
Oct 9 07:20:34.641297 containerd[1594]: time="2024-10-09T07:20:34.637596565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:20:34.641297 containerd[1594]: time="2024-10-09T07:20:34.637674802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:34.641297 containerd[1594]: time="2024-10-09T07:20:34.637708930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:20:34.641297 containerd[1594]: time="2024-10-09T07:20:34.637724387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:34.652479 kubelet[2700]: E1009 07:20:34.651599 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.652479 kubelet[2700]: W1009 07:20:34.651625 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.652479 kubelet[2700]: E1009 07:20:34.651700 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.721682 kubelet[2700]: E1009 07:20:34.721521 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.722971 kubelet[2700]: W1009 07:20:34.722772 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.724577 kubelet[2700]: E1009 07:20:34.723788 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.725642 kubelet[2700]: E1009 07:20:34.725619 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.725642 kubelet[2700]: W1009 07:20:34.725644 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.726016 kubelet[2700]: E1009 07:20:34.725992 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.727413 kubelet[2700]: E1009 07:20:34.727378 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.727413 kubelet[2700]: W1009 07:20:34.727397 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.728001 kubelet[2700]: E1009 07:20:34.727974 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.728403 kubelet[2700]: E1009 07:20:34.728374 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.728403 kubelet[2700]: W1009 07:20:34.728396 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.728507 kubelet[2700]: E1009 07:20:34.728421 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.728845 kubelet[2700]: E1009 07:20:34.728826 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.729282 kubelet[2700]: W1009 07:20:34.729245 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.729313 kubelet[2700]: E1009 07:20:34.729293 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.730227 kubelet[2700]: E1009 07:20:34.730206 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.730227 kubelet[2700]: W1009 07:20:34.730222 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.730518 kubelet[2700]: E1009 07:20:34.730489 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.731292 kubelet[2700]: E1009 07:20:34.731273 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.731292 kubelet[2700]: W1009 07:20:34.731289 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.731404 kubelet[2700]: E1009 07:20:34.731389 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.731655 kubelet[2700]: E1009 07:20:34.731633 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.731655 kubelet[2700]: W1009 07:20:34.731649 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.731759 kubelet[2700]: E1009 07:20:34.731735 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.731909 kubelet[2700]: E1009 07:20:34.731896 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.731909 kubelet[2700]: W1009 07:20:34.731907 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.732068 kubelet[2700]: E1009 07:20:34.731983 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.732121 kubelet[2700]: E1009 07:20:34.732097 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.732121 kubelet[2700]: W1009 07:20:34.732103 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.732192 kubelet[2700]: E1009 07:20:34.732178 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.732413 kubelet[2700]: E1009 07:20:34.732394 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.732413 kubelet[2700]: W1009 07:20:34.732410 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.732591 kubelet[2700]: E1009 07:20:34.732572 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.746312 kubelet[2700]: E1009 07:20:34.746258 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.746312 kubelet[2700]: W1009 07:20:34.746296 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.747373 kubelet[2700]: E1009 07:20:34.746905 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.747968 kubelet[2700]: E1009 07:20:34.747949 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.747968 kubelet[2700]: W1009 07:20:34.747966 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.748274 kubelet[2700]: E1009 07:20:34.748256 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:34.751449 kubelet[2700]: E1009 07:20:34.751407 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.751449 kubelet[2700]: W1009 07:20:34.751440 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.751922 kubelet[2700]: E1009 07:20:34.751899 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.751922 kubelet[2700]: W1009 07:20:34.751915 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.753570 containerd[1594]: time="2024-10-09T07:20:34.750858242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpcxj,Uid:485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c,Namespace:calico-system,Attempt:0,}"
Oct 9 07:20:34.754834 kubelet[2700]: E1009 07:20:34.754811 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.754834 kubelet[2700]: W1009 07:20:34.754830 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.754938 kubelet[2700]: E1009 07:20:34.754853 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.755922 kubelet[2700]: E1009 07:20:34.755770 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.755922 kubelet[2700]: W1009 07:20:34.755796 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.755922 kubelet[2700]: E1009 07:20:34.755821 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.755922 kubelet[2700]: E1009 07:20:34.755859 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.755922 kubelet[2700]: E1009 07:20:34.755884 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.755922 kubelet[2700]: E1009 07:20:34.755895 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:34.756203 kubelet[2700]: E1009 07:20:34.756145 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:34.756203 kubelet[2700]: W1009 07:20:34.756156 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:34.756203 kubelet[2700]: E1009 07:20:34.756185 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 9 07:20:34.757559 kubelet[2700]: E1009 07:20:34.756359 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.757559 kubelet[2700]: W1009 07:20:34.756370 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.757559 kubelet[2700]: E1009 07:20:34.756390 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:34.757559 kubelet[2700]: E1009 07:20:34.756714 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.757559 kubelet[2700]: W1009 07:20:34.756724 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.757559 kubelet[2700]: E1009 07:20:34.756751 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:34.757559 kubelet[2700]: E1009 07:20:34.756918 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.757559 kubelet[2700]: W1009 07:20:34.756925 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.757559 kubelet[2700]: E1009 07:20:34.756944 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:34.757559 kubelet[2700]: E1009 07:20:34.757148 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.759999 kubelet[2700]: W1009 07:20:34.757156 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.759999 kubelet[2700]: E1009 07:20:34.757168 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:34.759999 kubelet[2700]: E1009 07:20:34.757431 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.759999 kubelet[2700]: W1009 07:20:34.757440 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.759999 kubelet[2700]: E1009 07:20:34.757455 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:34.759999 kubelet[2700]: E1009 07:20:34.757649 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.759999 kubelet[2700]: W1009 07:20:34.757656 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.759999 kubelet[2700]: E1009 07:20:34.757668 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:34.760463 kubelet[2700]: E1009 07:20:34.760132 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.760463 kubelet[2700]: W1009 07:20:34.760155 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.760463 kubelet[2700]: E1009 07:20:34.760183 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:34.772097 kubelet[2700]: E1009 07:20:34.771970 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:34.772097 kubelet[2700]: W1009 07:20:34.771994 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:34.772097 kubelet[2700]: E1009 07:20:34.772019 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:34.825346 containerd[1594]: time="2024-10-09T07:20:34.825165585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74f8c4dccb-dlrg6,Uid:5dbfb979-9107-431c-b5a6-1fcdd29ac8ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\"" Oct 9 07:20:34.827360 kubelet[2700]: E1009 07:20:34.826411 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:34.827837 containerd[1594]: time="2024-10-09T07:20:34.827069627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:20:34.827837 containerd[1594]: time="2024-10-09T07:20:34.827158883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:20:34.827837 containerd[1594]: time="2024-10-09T07:20:34.827205115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:20:34.827837 containerd[1594]: time="2024-10-09T07:20:34.827220165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:20:34.831478 containerd[1594]: time="2024-10-09T07:20:34.829695067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:20:34.926083 containerd[1594]: time="2024-10-09T07:20:34.926013937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpcxj,Uid:485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\"" Oct 9 07:20:34.928275 kubelet[2700]: E1009 07:20:34.927433 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:35.988161 kubelet[2700]: E1009 07:20:35.988066 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" Oct 9 07:20:37.796651 containerd[1594]: time="2024-10-09T07:20:37.796227848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:37.799084 containerd[1594]: time="2024-10-09T07:20:37.798046115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:20:37.799697 containerd[1594]: time="2024-10-09T07:20:37.799620575Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:37.803554 containerd[1594]: time="2024-10-09T07:20:37.803478539Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:37.806097 containerd[1594]: time="2024-10-09T07:20:37.805072316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.975312965s" Oct 9 07:20:37.806097 containerd[1594]: time="2024-10-09T07:20:37.805130424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:20:37.807996 containerd[1594]: time="2024-10-09T07:20:37.807755149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:20:37.842421 containerd[1594]: time="2024-10-09T07:20:37.842360322Z" level=info msg="CreateContainer within sandbox \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:20:37.875083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487340594.mount: Deactivated successfully. 
Oct 9 07:20:37.947225 containerd[1594]: time="2024-10-09T07:20:37.947135542Z" level=info msg="CreateContainer within sandbox \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\"" Oct 9 07:20:37.948607 containerd[1594]: time="2024-10-09T07:20:37.948256864Z" level=info msg="StartContainer for \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\"" Oct 9 07:20:37.999190 kubelet[2700]: E1009 07:20:37.999115 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" Oct 9 07:20:38.087574 containerd[1594]: time="2024-10-09T07:20:38.087398168Z" level=info msg="StartContainer for \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\" returns successfully" Oct 9 07:20:38.284172 containerd[1594]: time="2024-10-09T07:20:38.283436483Z" level=info msg="StopContainer for \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\" with timeout 300 (s)" Oct 9 07:20:38.284846 containerd[1594]: time="2024-10-09T07:20:38.284797291Z" level=info msg="Stop container \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\" with signal terminated" Oct 9 07:20:38.329397 kubelet[2700]: I1009 07:20:38.329243 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-74f8c4dccb-dlrg6" podStartSLOduration=1.349027168 podStartE2EDuration="4.325388985s" podCreationTimestamp="2024-10-09 07:20:34 +0000 UTC" firstStartedPulling="2024-10-09 07:20:34.829248815 +0000 UTC m=+20.020954749" lastFinishedPulling="2024-10-09 07:20:37.805610633 +0000 UTC m=+22.997316566" 
observedRunningTime="2024-10-09 07:20:38.325048543 +0000 UTC m=+23.516754499" watchObservedRunningTime="2024-10-09 07:20:38.325388985 +0000 UTC m=+23.517094963" Oct 9 07:20:38.400855 containerd[1594]: time="2024-10-09T07:20:38.399738507Z" level=info msg="shim disconnected" id=dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5 namespace=k8s.io Oct 9 07:20:38.400855 containerd[1594]: time="2024-10-09T07:20:38.399901501Z" level=warning msg="cleaning up after shim disconnected" id=dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5 namespace=k8s.io Oct 9 07:20:38.400855 containerd[1594]: time="2024-10-09T07:20:38.399910757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:20:38.439097 containerd[1594]: time="2024-10-09T07:20:38.438289489Z" level=info msg="StopContainer for \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\" returns successfully" Oct 9 07:20:38.442274 containerd[1594]: time="2024-10-09T07:20:38.442017436Z" level=info msg="StopPodSandbox for \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\"" Oct 9 07:20:38.442274 containerd[1594]: time="2024-10-09T07:20:38.442069379Z" level=info msg="Container to stop \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 07:20:38.497309 containerd[1594]: time="2024-10-09T07:20:38.496987358Z" level=info msg="shim disconnected" id=5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530 namespace=k8s.io Oct 9 07:20:38.497309 containerd[1594]: time="2024-10-09T07:20:38.497059422Z" level=warning msg="cleaning up after shim disconnected" id=5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530 namespace=k8s.io Oct 9 07:20:38.497309 containerd[1594]: time="2024-10-09T07:20:38.497071627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:20:38.526670 containerd[1594]: time="2024-10-09T07:20:38.526231986Z" level=info 
msg="TearDown network for sandbox \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" successfully" Oct 9 07:20:38.526670 containerd[1594]: time="2024-10-09T07:20:38.526267504Z" level=info msg="StopPodSandbox for \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" returns successfully" Oct 9 07:20:38.562579 kubelet[2700]: I1009 07:20:38.561626 2700 topology_manager.go:215] "Topology Admit Handler" podUID="0584163c-c8d9-439f-9de8-be7d546a1a37" podNamespace="calico-system" podName="calico-typha-7b8c8f8f6c-fkwt5" Oct 9 07:20:38.562579 kubelet[2700]: E1009 07:20:38.561711 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5dbfb979-9107-431c-b5a6-1fcdd29ac8ca" containerName="calico-typha" Oct 9 07:20:38.564749 kubelet[2700]: I1009 07:20:38.563590 2700 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbfb979-9107-431c-b5a6-1fcdd29ac8ca" containerName="calico-typha" Oct 9 07:20:38.566173 kubelet[2700]: E1009 07:20:38.565366 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.566173 kubelet[2700]: W1009 07:20:38.565388 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.566173 kubelet[2700]: E1009 07:20:38.565486 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.566173 kubelet[2700]: I1009 07:20:38.566122 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-tigera-ca-bundle\") pod \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\" (UID: \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\") " Oct 9 07:20:38.570054 kubelet[2700]: E1009 07:20:38.569801 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.570054 kubelet[2700]: W1009 07:20:38.569832 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.570054 kubelet[2700]: E1009 07:20:38.569861 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.570054 kubelet[2700]: I1009 07:20:38.569911 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-typha-certs\") pod \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\" (UID: \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\") " Oct 9 07:20:38.571680 kubelet[2700]: E1009 07:20:38.571273 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.571680 kubelet[2700]: W1009 07:20:38.571399 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.571680 kubelet[2700]: E1009 07:20:38.571436 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.573755 kubelet[2700]: E1009 07:20:38.572627 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.573755 kubelet[2700]: W1009 07:20:38.572644 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.573755 kubelet[2700]: E1009 07:20:38.572668 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.574600 kubelet[2700]: I1009 07:20:38.574313 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbggq\" (UniqueName: \"kubernetes.io/projected/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-kube-api-access-mbggq\") pod \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\" (UID: \"5dbfb979-9107-431c-b5a6-1fcdd29ac8ca\") " Oct 9 07:20:38.580509 kubelet[2700]: E1009 07:20:38.580297 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.580509 kubelet[2700]: W1009 07:20:38.580323 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.580509 kubelet[2700]: E1009 07:20:38.580349 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.584292 kubelet[2700]: I1009 07:20:38.583638 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "5dbfb979-9107-431c-b5a6-1fcdd29ac8ca" (UID: "5dbfb979-9107-431c-b5a6-1fcdd29ac8ca"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 07:20:38.589756 kubelet[2700]: E1009 07:20:38.589715 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.589756 kubelet[2700]: W1009 07:20:38.589746 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.589952 kubelet[2700]: E1009 07:20:38.589779 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.590178 kubelet[2700]: I1009 07:20:38.590155 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5dbfb979-9107-431c-b5a6-1fcdd29ac8ca" (UID: "5dbfb979-9107-431c-b5a6-1fcdd29ac8ca"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 07:20:38.591684 kubelet[2700]: I1009 07:20:38.591644 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-kube-api-access-mbggq" (OuterVolumeSpecName: "kube-api-access-mbggq") pod "5dbfb979-9107-431c-b5a6-1fcdd29ac8ca" (UID: "5dbfb979-9107-431c-b5a6-1fcdd29ac8ca"). InnerVolumeSpecName "kube-api-access-mbggq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 07:20:38.660345 kubelet[2700]: E1009 07:20:38.659155 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.660345 kubelet[2700]: W1009 07:20:38.659193 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.660345 kubelet[2700]: E1009 07:20:38.659218 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.661125 kubelet[2700]: E1009 07:20:38.660996 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.661125 kubelet[2700]: W1009 07:20:38.661014 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.661125 kubelet[2700]: E1009 07:20:38.661037 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.662228 kubelet[2700]: E1009 07:20:38.662210 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.663074 kubelet[2700]: W1009 07:20:38.662914 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.663074 kubelet[2700]: E1009 07:20:38.662950 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.663468 kubelet[2700]: E1009 07:20:38.663403 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.663468 kubelet[2700]: W1009 07:20:38.663420 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.663468 kubelet[2700]: E1009 07:20:38.663438 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.664584 kubelet[2700]: E1009 07:20:38.663917 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.664584 kubelet[2700]: W1009 07:20:38.663931 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.664584 kubelet[2700]: E1009 07:20:38.663952 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.665647 kubelet[2700]: E1009 07:20:38.665627 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.665647 kubelet[2700]: W1009 07:20:38.665642 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.665821 kubelet[2700]: E1009 07:20:38.665661 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.665920 kubelet[2700]: E1009 07:20:38.665906 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.665920 kubelet[2700]: W1009 07:20:38.665919 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.666047 kubelet[2700]: E1009 07:20:38.665933 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.666149 kubelet[2700]: E1009 07:20:38.666136 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.666212 kubelet[2700]: W1009 07:20:38.666150 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.666212 kubelet[2700]: E1009 07:20:38.666167 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.667801 kubelet[2700]: E1009 07:20:38.667638 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.667801 kubelet[2700]: W1009 07:20:38.667660 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.667801 kubelet[2700]: E1009 07:20:38.667687 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.668179 kubelet[2700]: E1009 07:20:38.668096 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.668179 kubelet[2700]: W1009 07:20:38.668110 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.668179 kubelet[2700]: E1009 07:20:38.668129 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:38.668698 kubelet[2700]: E1009 07:20:38.668580 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.668698 kubelet[2700]: W1009 07:20:38.668595 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.668698 kubelet[2700]: E1009 07:20:38.668613 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:38.669211 kubelet[2700]: E1009 07:20:38.669080 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:38.669211 kubelet[2700]: W1009 07:20:38.669095 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:38.669211 kubelet[2700]: E1009 07:20:38.669112 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 9 07:20:38.676102 kubelet[2700]: E1009 07:20:38.676073 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.676429 kubelet[2700]: W1009 07:20:38.676258 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.676429 kubelet[2700]: E1009 07:20:38.676296 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.676429 kubelet[2700]: I1009 07:20:38.676347 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdb7\" (UniqueName: \"kubernetes.io/projected/0584163c-c8d9-439f-9de8-be7d546a1a37-kube-api-access-jmdb7\") pod \"calico-typha-7b8c8f8f6c-fkwt5\" (UID: \"0584163c-c8d9-439f-9de8-be7d546a1a37\") " pod="calico-system/calico-typha-7b8c8f8f6c-fkwt5"
Oct 9 07:20:38.677416 kubelet[2700]: E1009 07:20:38.677371 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.677416 kubelet[2700]: W1009 07:20:38.677391 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.677937 kubelet[2700]: E1009 07:20:38.677787 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.677937 kubelet[2700]: I1009 07:20:38.677833 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0584163c-c8d9-439f-9de8-be7d546a1a37-typha-certs\") pod \"calico-typha-7b8c8f8f6c-fkwt5\" (UID: \"0584163c-c8d9-439f-9de8-be7d546a1a37\") " pod="calico-system/calico-typha-7b8c8f8f6c-fkwt5"
Oct 9 07:20:38.678163 kubelet[2700]: E1009 07:20:38.678140 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.678265 kubelet[2700]: W1009 07:20:38.678248 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.678614 kubelet[2700]: E1009 07:20:38.678525 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.679014 kubelet[2700]: E1009 07:20:38.678997 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.679087 kubelet[2700]: W1009 07:20:38.679017 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.679087 kubelet[2700]: E1009 07:20:38.679042 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:38.679087 kubelet[2700]: I1009 07:20:38.679071 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0584163c-c8d9-439f-9de8-be7d546a1a37-tigera-ca-bundle\") pod \"calico-typha-7b8c8f8f6c-fkwt5\" (UID: \"0584163c-c8d9-439f-9de8-be7d546a1a37\") " pod="calico-system/calico-typha-7b8c8f8f6c-fkwt5"
Oct 9 07:20:38.679273 kubelet[2700]: E1009 07:20:38.679260 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.679273 kubelet[2700]: W1009 07:20:38.679271 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.679336 kubelet[2700]: E1009 07:20:38.679287 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.679450 kubelet[2700]: E1009 07:20:38.679440 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.679450 kubelet[2700]: W1009 07:20:38.679449 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.679510 kubelet[2700]: E1009 07:20:38.679459 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.679786 kubelet[2700]: E1009 07:20:38.679729 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.679786 kubelet[2700]: W1009 07:20:38.679741 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.679786 kubelet[2700]: E1009 07:20:38.679763 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.680634 kubelet[2700]: E1009 07:20:38.679971 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.680634 kubelet[2700]: W1009 07:20:38.679980 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.680634 kubelet[2700]: E1009 07:20:38.679992 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:38.680634 kubelet[2700]: E1009 07:20:38.680181 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.680634 kubelet[2700]: W1009 07:20:38.680191 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.680634 kubelet[2700]: E1009 07:20:38.680201 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.680634 kubelet[2700]: I1009 07:20:38.680322 2700 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-tigera-ca-bundle\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:38.680634 kubelet[2700]: I1009 07:20:38.680337 2700 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-typha-certs\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:38.680634 kubelet[2700]: I1009 07:20:38.680349 2700 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mbggq\" (UniqueName: \"kubernetes.io/projected/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca-kube-api-access-mbggq\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:38.781826 kubelet[2700]: E1009 07:20:38.781740 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.781826 kubelet[2700]: W1009 07:20:38.781766 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.781826 kubelet[2700]: E1009 07:20:38.781790 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.782739 kubelet[2700]: E1009 07:20:38.782204 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.782739 kubelet[2700]: W1009 07:20:38.782216 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.782739 kubelet[2700]: E1009 07:20:38.782232 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.783676 kubelet[2700]: E1009 07:20:38.783654 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.783762 kubelet[2700]: W1009 07:20:38.783684 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.783762 kubelet[2700]: E1009 07:20:38.783720 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:38.784018 kubelet[2700]: E1009 07:20:38.783991 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.784018 kubelet[2700]: W1009 07:20:38.784003 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.784018 kubelet[2700]: E1009 07:20:38.784019 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.784731 kubelet[2700]: E1009 07:20:38.784633 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.784731 kubelet[2700]: W1009 07:20:38.784645 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.784731 kubelet[2700]: E1009 07:20:38.784728 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.785284 kubelet[2700]: E1009 07:20:38.785267 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.785284 kubelet[2700]: W1009 07:20:38.785281 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.785424 kubelet[2700]: E1009 07:20:38.785318 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.785478 kubelet[2700]: E1009 07:20:38.785465 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.785566 kubelet[2700]: W1009 07:20:38.785477 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.785566 kubelet[2700]: E1009 07:20:38.785500 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:38.786585 kubelet[2700]: E1009 07:20:38.785939 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.786585 kubelet[2700]: W1009 07:20:38.785954 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.786585 kubelet[2700]: E1009 07:20:38.785973 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.786768 kubelet[2700]: E1009 07:20:38.786751 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.786807 kubelet[2700]: W1009 07:20:38.786766 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.787881 kubelet[2700]: E1009 07:20:38.787602 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.788207 kubelet[2700]: E1009 07:20:38.788193 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.788299 kubelet[2700]: W1009 07:20:38.788287 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.788549 kubelet[2700]: E1009 07:20:38.788492 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.788712 kubelet[2700]: E1009 07:20:38.788668 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.788712 kubelet[2700]: W1009 07:20:38.788680 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.788905 kubelet[2700]: E1009 07:20:38.788864 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:38.789334 kubelet[2700]: E1009 07:20:38.789266 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.789334 kubelet[2700]: W1009 07:20:38.789282 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.789627 kubelet[2700]: E1009 07:20:38.789578 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.789907 kubelet[2700]: E1009 07:20:38.789796 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.789907 kubelet[2700]: W1009 07:20:38.789807 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.789907 kubelet[2700]: E1009 07:20:38.789822 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.790236 kubelet[2700]: E1009 07:20:38.790152 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.790236 kubelet[2700]: W1009 07:20:38.790162 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.790236 kubelet[2700]: E1009 07:20:38.790183 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.791501 kubelet[2700]: E1009 07:20:38.791489 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.791604 kubelet[2700]: W1009 07:20:38.791593 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.791655 kubelet[2700]: E1009 07:20:38.791648 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:38.792593 kubelet[2700]: E1009 07:20:38.792578 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.792729 kubelet[2700]: W1009 07:20:38.792669 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.792729 kubelet[2700]: E1009 07:20:38.792687 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.799268 kubelet[2700]: E1009 07:20:38.799179 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.799268 kubelet[2700]: W1009 07:20:38.799200 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.799268 kubelet[2700]: E1009 07:20:38.799223 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.807037 kubelet[2700]: E1009 07:20:38.806939 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:38.807037 kubelet[2700]: W1009 07:20:38.806961 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:38.807037 kubelet[2700]: E1009 07:20:38.806988 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:38.832746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5-rootfs.mount: Deactivated successfully.
Oct 9 07:20:38.833156 systemd[1]: var-lib-kubelet-pods-5dbfb979\x2d9107\x2d431c\x2db5a6\x2d1fcdd29ac8ca-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Oct 9 07:20:38.833390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530-rootfs.mount: Deactivated successfully.
Oct 9 07:20:38.833618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530-shm.mount: Deactivated successfully.
Oct 9 07:20:38.833872 systemd[1]: var-lib-kubelet-pods-5dbfb979\x2d9107\x2d431c\x2db5a6\x2d1fcdd29ac8ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmbggq.mount: Deactivated successfully.
Oct 9 07:20:38.834067 systemd[1]: var-lib-kubelet-pods-5dbfb979\x2d9107\x2d431c\x2db5a6\x2d1fcdd29ac8ca-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Oct 9 07:20:38.868316 kubelet[2700]: E1009 07:20:38.868196 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:38.869297 containerd[1594]: time="2024-10-09T07:20:38.869171105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b8c8f8f6c-fkwt5,Uid:0584163c-c8d9-439f-9de8-be7d546a1a37,Namespace:calico-system,Attempt:0,}"
Oct 9 07:20:38.914690 containerd[1594]: time="2024-10-09T07:20:38.910000037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:20:38.914690 containerd[1594]: time="2024-10-09T07:20:38.910099429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:38.914690 containerd[1594]: time="2024-10-09T07:20:38.910130096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:20:38.914690 containerd[1594]: time="2024-10-09T07:20:38.910150821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:20:39.012444 containerd[1594]: time="2024-10-09T07:20:39.012399809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b8c8f8f6c-fkwt5,Uid:0584163c-c8d9-439f-9de8-be7d546a1a37,Namespace:calico-system,Attempt:0,} returns sandbox id \"005327737ce6a7c1c55f638e10532debc3ee9e54193eb49216a17c542ba77e03\""
Oct 9 07:20:39.014065 kubelet[2700]: E1009 07:20:39.014006 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:39.031588 containerd[1594]: time="2024-10-09T07:20:39.030883195Z" level=info msg="CreateContainer within sandbox \"005327737ce6a7c1c55f638e10532debc3ee9e54193eb49216a17c542ba77e03\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 9 07:20:39.045486 containerd[1594]: time="2024-10-09T07:20:39.045416282Z" level=info msg="CreateContainer within sandbox \"005327737ce6a7c1c55f638e10532debc3ee9e54193eb49216a17c542ba77e03\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9d36c077f08666bedbf83f26f65dd290fade4feac19dfac075fb06bacfd3db71\""
Oct 9 07:20:39.048179 containerd[1594]: time="2024-10-09T07:20:39.048115968Z" level=info msg="StartContainer for \"9d36c077f08666bedbf83f26f65dd290fade4feac19dfac075fb06bacfd3db71\""
Oct 9 07:20:39.177609 containerd[1594]: time="2024-10-09T07:20:39.177324544Z" level=info msg="StartContainer for \"9d36c077f08666bedbf83f26f65dd290fade4feac19dfac075fb06bacfd3db71\" returns successfully"
Oct 9 07:20:39.287382 kubelet[2700]: E1009 07:20:39.287281 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:20:39.294333 kubelet[2700]: I1009 07:20:39.293260 2700 scope.go:117] "RemoveContainer"
containerID="dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5"
Oct 9 07:20:39.306664 containerd[1594]: time="2024-10-09T07:20:39.304278228Z" level=info msg="RemoveContainer for \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\""
Oct 9 07:20:39.317461 kubelet[2700]: I1009 07:20:39.317130 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7b8c8f8f6c-fkwt5" podStartSLOduration=4.317080533 podStartE2EDuration="4.317080533s" podCreationTimestamp="2024-10-09 07:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:20:39.314114413 +0000 UTC m=+24.505820369" watchObservedRunningTime="2024-10-09 07:20:39.317080533 +0000 UTC m=+24.508786488"
Oct 9 07:20:39.323871 containerd[1594]: time="2024-10-09T07:20:39.321911404Z" level=info msg="RemoveContainer for \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\" returns successfully"
Oct 9 07:20:39.324689 kubelet[2700]: I1009 07:20:39.323523 2700 scope.go:117] "RemoveContainer" containerID="dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5"
Oct 9 07:20:39.327274 containerd[1594]: time="2024-10-09T07:20:39.326270983Z" level=error msg="ContainerStatus for \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\": not found"
Oct 9 07:20:39.328442 kubelet[2700]: E1009 07:20:39.327671 2700 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\": not found" containerID="dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5"
Oct 9 07:20:39.328442 kubelet[2700]: I1009 07:20:39.327743 2700 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5"} err="failed to get container status \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbf572e429593678d6c3e84edf622a03173c6bbd989d688dc599f8ea2cbef5d5\": not found"
Oct 9 07:20:39.376811 kubelet[2700]: E1009 07:20:39.374721 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.376811 kubelet[2700]: W1009 07:20:39.374747 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.376811 kubelet[2700]: E1009 07:20:39.374774 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.376811 kubelet[2700]: E1009 07:20:39.375009 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.376811 kubelet[2700]: W1009 07:20:39.375033 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.376811 kubelet[2700]: E1009 07:20:39.375048 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:39.376811 kubelet[2700]: E1009 07:20:39.375334 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.376811 kubelet[2700]: W1009 07:20:39.375344 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.376811 kubelet[2700]: E1009 07:20:39.375374 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.376811 kubelet[2700]: E1009 07:20:39.375703 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.377324 kubelet[2700]: W1009 07:20:39.375713 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.377324 kubelet[2700]: E1009 07:20:39.375727 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.377324 kubelet[2700]: E1009 07:20:39.376008 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.377324 kubelet[2700]: W1009 07:20:39.376018 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.377324 kubelet[2700]: E1009 07:20:39.376030 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.377324 kubelet[2700]: E1009 07:20:39.376253 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.377324 kubelet[2700]: W1009 07:20:39.376262 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.377324 kubelet[2700]: E1009 07:20:39.376273 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 07:20:39.377324 kubelet[2700]: E1009 07:20:39.376454 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.377324 kubelet[2700]: W1009 07:20:39.376462 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.382510 kubelet[2700]: E1009 07:20:39.376483 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.382510 kubelet[2700]: E1009 07:20:39.376754 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.382510 kubelet[2700]: W1009 07:20:39.376766 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.382510 kubelet[2700]: E1009 07:20:39.376778 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.382510 kubelet[2700]: E1009 07:20:39.377000 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.382510 kubelet[2700]: W1009 07:20:39.377008 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.382510 kubelet[2700]: E1009 07:20:39.377018 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.382510 kubelet[2700]: E1009 07:20:39.377237 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.382510 kubelet[2700]: W1009 07:20:39.377247 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.382510 kubelet[2700]: E1009 07:20:39.377260 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 9 07:20:39.383038 kubelet[2700]: E1009 07:20:39.377453 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.383038 kubelet[2700]: W1009 07:20:39.377461 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.383038 kubelet[2700]: E1009 07:20:39.377473 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.383038 kubelet[2700]: E1009 07:20:39.377667 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.383038 kubelet[2700]: W1009 07:20:39.377689 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.383038 kubelet[2700]: E1009 07:20:39.377711 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.383038 kubelet[2700]: E1009 07:20:39.377883 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.383038 kubelet[2700]: W1009 07:20:39.377890 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.383038 kubelet[2700]: E1009 07:20:39.377900 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.383038 kubelet[2700]: E1009 07:20:39.378059 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.383295 kubelet[2700]: W1009 07:20:39.378066 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.383295 kubelet[2700]: E1009 07:20:39.378076 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.383295 kubelet[2700]: E1009 07:20:39.378251 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.383295 kubelet[2700]: W1009 07:20:39.378258 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.383295 kubelet[2700]: E1009 07:20:39.378270 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.388318 kubelet[2700]: E1009 07:20:39.388270 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.389135 kubelet[2700]: W1009 07:20:39.388545 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.389135 kubelet[2700]: E1009 07:20:39.388692 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.389865 kubelet[2700]: E1009 07:20:39.389479 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.389865 kubelet[2700]: W1009 07:20:39.389493 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.389865 kubelet[2700]: E1009 07:20:39.389515 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.390403 kubelet[2700]: E1009 07:20:39.390272 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.390403 kubelet[2700]: W1009 07:20:39.390303 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.390985 kubelet[2700]: E1009 07:20:39.390639 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.391296 kubelet[2700]: E1009 07:20:39.391280 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.391850 kubelet[2700]: W1009 07:20:39.391699 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.391850 kubelet[2700]: E1009 07:20:39.391745 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.392895 kubelet[2700]: E1009 07:20:39.392388 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.392895 kubelet[2700]: W1009 07:20:39.392407 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.392895 kubelet[2700]: E1009 07:20:39.392453 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.393108 kubelet[2700]: E1009 07:20:39.393093 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.393494 kubelet[2700]: W1009 07:20:39.393197 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.393494 kubelet[2700]: E1009 07:20:39.393396 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.394163 kubelet[2700]: E1009 07:20:39.394025 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.394163 kubelet[2700]: W1009 07:20:39.394038 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.394640 kubelet[2700]: E1009 07:20:39.394354 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.395024 kubelet[2700]: E1009 07:20:39.394885 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.395024 kubelet[2700]: W1009 07:20:39.394902 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.395293 kubelet[2700]: E1009 07:20:39.395140 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.395519 kubelet[2700]: E1009 07:20:39.395413 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.395519 kubelet[2700]: W1009 07:20:39.395424 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.396001 kubelet[2700]: E1009 07:20:39.395891 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.396001 kubelet[2700]: E1009 07:20:39.395963 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.396001 kubelet[2700]: W1009 07:20:39.395972 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.396512 kubelet[2700]: E1009 07:20:39.396285 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.397186 kubelet[2700]: E1009 07:20:39.396988 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.397186 kubelet[2700]: W1009 07:20:39.397004 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.397186 kubelet[2700]: E1009 07:20:39.397136 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.398615 kubelet[2700]: E1009 07:20:39.398397 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.398615 kubelet[2700]: W1009 07:20:39.398414 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.398615 kubelet[2700]: E1009 07:20:39.398506 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.400735 kubelet[2700]: E1009 07:20:39.399961 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.400735 kubelet[2700]: W1009 07:20:39.399977 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.400735 kubelet[2700]: E1009 07:20:39.400385 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.401677 kubelet[2700]: E1009 07:20:39.401535 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.401677 kubelet[2700]: W1009 07:20:39.401574 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.402324 kubelet[2700]: E1009 07:20:39.402086 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.402578 kubelet[2700]: E1009 07:20:39.402421 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.402578 kubelet[2700]: W1009 07:20:39.402433 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.402843 kubelet[2700]: E1009 07:20:39.402829 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:20:39.403564 kubelet[2700]: E1009 07:20:39.403485 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.403564 kubelet[2700]: W1009 07:20:39.403499 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.404013 kubelet[2700]: E1009 07:20:39.403675 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:20:39.404414 kubelet[2700]: E1009 07:20:39.404307 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:20:39.405008 kubelet[2700]: W1009 07:20:39.404510 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:20:39.405008 kubelet[2700]: E1009 07:20:39.404549 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 9 07:20:39.410849 kubelet[2700]: E1009 07:20:39.410751 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:20:39.410849 kubelet[2700]: W1009 07:20:39.410776 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:20:39.410849 kubelet[2700]: E1009 07:20:39.410801 2700 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:20:39.512360 containerd[1594]: time="2024-10-09T07:20:39.511243869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:39.514319 containerd[1594]: time="2024-10-09T07:20:39.513344925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Oct 9 07:20:39.516576 containerd[1594]: time="2024-10-09T07:20:39.516458546Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:39.525679 containerd[1594]: time="2024-10-09T07:20:39.525523680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:20:39.527358 containerd[1594]: time="2024-10-09T07:20:39.527139935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.719341266s"
Oct 9 07:20:39.527358 containerd[1594]: time="2024-10-09T07:20:39.527198140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Oct 9 07:20:39.531604 containerd[1594]: time="2024-10-09T07:20:39.531526697Z" level=info msg="CreateContainer within sandbox \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 9 07:20:39.576052 containerd[1594]: time="2024-10-09T07:20:39.575948293Z" level=info msg="CreateContainer within sandbox \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f\""
Oct 9 07:20:39.578819 containerd[1594]: time="2024-10-09T07:20:39.577192580Z" level=info msg="StartContainer for \"5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f\""
Oct 9 07:20:39.698702 containerd[1594]: time="2024-10-09T07:20:39.698644040Z" level=info msg="StartContainer for \"5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f\" returns successfully"
Oct 9 07:20:39.760923 containerd[1594]: time="2024-10-09T07:20:39.760848787Z" level=info msg="shim disconnected" id=5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f namespace=k8s.io
Oct 9 07:20:39.760923 containerd[1594]: time="2024-10-09T07:20:39.760913001Z" level=warning msg="cleaning up after shim disconnected" id=5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f namespace=k8s.io
Oct 9 07:20:39.760923 containerd[1594]: time="2024-10-09T07:20:39.760921797Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:20:39.987764 kubelet[2700]: E1009 07:20:39.987704 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e"
Oct 9 07:20:40.301565 containerd[1594]: time="2024-10-09T07:20:40.301400996Z" level=info msg="StopPodSandbox for \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\""
Oct 9 07:20:40.301565 containerd[1594]: time="2024-10-09T07:20:40.301485143Z" level=info msg="Container to stop \"5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 07:20:40.307096 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d-shm.mount: Deactivated successfully.
Oct 9 07:20:40.372293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d-rootfs.mount: Deactivated successfully.
Oct 9 07:20:40.373709 containerd[1594]: time="2024-10-09T07:20:40.373456952Z" level=info msg="shim disconnected" id=23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d namespace=k8s.io
Oct 9 07:20:40.373709 containerd[1594]: time="2024-10-09T07:20:40.373522432Z" level=warning msg="cleaning up after shim disconnected" id=23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d namespace=k8s.io
Oct 9 07:20:40.373709 containerd[1594]: time="2024-10-09T07:20:40.373549687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:20:40.406182 containerd[1594]: time="2024-10-09T07:20:40.406000518Z" level=info msg="TearDown network for sandbox \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" successfully"
Oct 9 07:20:40.406182 containerd[1594]: time="2024-10-09T07:20:40.406034923Z" level=info msg="StopPodSandbox for \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" returns successfully"
Oct 9 07:20:40.502937 kubelet[2700]: I1009 07:20:40.502887 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-run-calico\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.503466 kubelet[2700]: I1009 07:20:40.503029 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-log-dir\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.503466 kubelet[2700]: I1009 07:20:40.503057 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-node-certs\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.503466 kubelet[2700]: I1009 07:20:40.503077 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-flexvol-driver-host\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.503466 kubelet[2700]: I1009 07:20:40.503099 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.503466 kubelet[2700]: I1009 07:20:40.502983 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.503466 kubelet[2700]: I1009 07:20:40.503182 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-bin-dir\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.504248 kubelet[2700]: I1009 07:20:40.503201 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-xtables-lock\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.504248 kubelet[2700]: I1009 07:20:40.503719 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-tigera-ca-bundle\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.504248 kubelet[2700]: I1009 07:20:40.503751 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-net-dir\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.504248 kubelet[2700]: I1009 07:20:40.503892 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-policysync\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.504248 kubelet[2700]: I1009 07:20:40.503917 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-lib-modules\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.505074 kubelet[2700]: I1009 07:20:40.504774 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7q9l\" (UniqueName: \"kubernetes.io/projected/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-kube-api-access-x7q9l\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.505074 kubelet[2700]: I1009 07:20:40.504817 2700 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-lib-calico\") pod \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\" (UID: \"485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c\") "
Oct 9 07:20:40.505074 kubelet[2700]: I1009 07:20:40.504910 2700 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-log-dir\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.505074 kubelet[2700]: I1009 07:20:40.504929 2700 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-run-calico\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.505796 kubelet[2700]: I1009 07:20:40.505748 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.505982 kubelet[2700]: I1009 07:20:40.505940 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.506138 kubelet[2700]: I1009 07:20:40.506085 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-policysync" (OuterVolumeSpecName: "policysync") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.506248 kubelet[2700]: I1009 07:20:40.506234 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.512377 systemd[1]: var-lib-kubelet-pods-485a9c84\x2d4ef0\x2d42e1\x2d98cf\x2d0ede0a2e7a6c-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Oct 9 07:20:40.516357 kubelet[2700]: I1009 07:20:40.506431 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.516357 kubelet[2700]: I1009 07:20:40.506914 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.516357 kubelet[2700]: I1009 07:20:40.513200 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 07:20:40.516357 kubelet[2700]: I1009 07:20:40.516211 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 9 07:20:40.517047 kubelet[2700]: I1009 07:20:40.516766 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-node-certs" (OuterVolumeSpecName: "node-certs") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 9 07:20:40.525453 kubelet[2700]: I1009 07:20:40.525319 2700 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-kube-api-access-x7q9l" (OuterVolumeSpecName: "kube-api-access-x7q9l") pod "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" (UID: "485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c"). InnerVolumeSpecName "kube-api-access-x7q9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 9 07:20:40.526121 systemd[1]: var-lib-kubelet-pods-485a9c84\x2d4ef0\x2d42e1\x2d98cf\x2d0ede0a2e7a6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx7q9l.mount: Deactivated successfully.
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605756 2700 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-flexvol-driver-host\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605795 2700 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-node-certs\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605809 2700 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-bin-dir\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605822 2700 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-xtables-lock\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605833 2700 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-tigera-ca-bundle\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605843 2700 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-cni-net-dir\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605852 2700 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-policysync\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.605997 kubelet[2700]: I1009 07:20:40.605861 2700 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-var-lib-calico\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.606482 kubelet[2700]: I1009 07:20:40.605870 2700 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-lib-modules\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.606482 kubelet[2700]: I1009 07:20:40.605880 2700 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x7q9l\" (UniqueName: \"kubernetes.io/projected/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c-kube-api-access-x7q9l\") on node \"ci-3975.2.2-f-f6e42a54cc\" DevicePath \"\""
Oct 9 07:20:40.991134 kubelet[2700]: I1009 07:20:40.990998 2700 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5dbfb979-9107-431c-b5a6-1fcdd29ac8ca" path="/var/lib/kubelet/pods/5dbfb979-9107-431c-b5a6-1fcdd29ac8ca/volumes"
Oct 9 07:20:41.303802 kubelet[2700]: I1009 07:20:41.303761 2700 scope.go:117] "RemoveContainer" containerID="5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f"
Oct 9 07:20:41.306258
containerd[1594]: time="2024-10-09T07:20:41.306224739Z" level=info msg="RemoveContainer for \"5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f\"" Oct 9 07:20:41.309657 containerd[1594]: time="2024-10-09T07:20:41.309614219Z" level=info msg="RemoveContainer for \"5b16400895e4997f13ae7eced825231f408bda934136219307c1c35a868d1f0f\" returns successfully" Oct 9 07:20:41.361215 kubelet[2700]: I1009 07:20:41.361176 2700 topology_manager.go:215] "Topology Admit Handler" podUID="64fea273-5519-428c-8eb9-3148fc7b73aa" podNamespace="calico-system" podName="calico-node-69rb7" Oct 9 07:20:41.363008 kubelet[2700]: E1009 07:20:41.361239 2700 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" containerName="flexvol-driver" Oct 9 07:20:41.363008 kubelet[2700]: I1009 07:20:41.361269 2700 memory_manager.go:354] "RemoveStaleState removing state" podUID="485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" containerName="flexvol-driver" Oct 9 07:20:41.410767 kubelet[2700]: I1009 07:20:41.410728 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-cni-log-dir\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.410996 kubelet[2700]: I1009 07:20:41.410980 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-lib-modules\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411213 kubelet[2700]: I1009 07:20:41.411187 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-policysync\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411488 kubelet[2700]: I1009 07:20:41.411368 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64fea273-5519-428c-8eb9-3148fc7b73aa-node-certs\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411488 kubelet[2700]: I1009 07:20:41.411400 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-cni-bin-dir\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411488 kubelet[2700]: I1009 07:20:41.411437 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-var-lib-calico\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411488 kubelet[2700]: I1009 07:20:41.411459 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fea273-5519-428c-8eb9-3148fc7b73aa-tigera-ca-bundle\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411488 kubelet[2700]: I1009 07:20:41.411477 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-var-run-calico\") pod 
\"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411970 kubelet[2700]: I1009 07:20:41.411596 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-cni-net-dir\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.411970 kubelet[2700]: I1009 07:20:41.411796 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-flexvol-driver-host\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.412315 kubelet[2700]: I1009 07:20:41.412045 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64fea273-5519-428c-8eb9-3148fc7b73aa-xtables-lock\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.412832 kubelet[2700]: I1009 07:20:41.412495 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wp9\" (UniqueName: \"kubernetes.io/projected/64fea273-5519-428c-8eb9-3148fc7b73aa-kube-api-access-p8wp9\") pod \"calico-node-69rb7\" (UID: \"64fea273-5519-428c-8eb9-3148fc7b73aa\") " pod="calico-system/calico-node-69rb7" Oct 9 07:20:41.668386 kubelet[2700]: E1009 07:20:41.667907 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:41.670691 containerd[1594]: time="2024-10-09T07:20:41.670654650Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69rb7,Uid:64fea273-5519-428c-8eb9-3148fc7b73aa,Namespace:calico-system,Attempt:0,}" Oct 9 07:20:41.702835 containerd[1594]: time="2024-10-09T07:20:41.702670047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:20:41.702835 containerd[1594]: time="2024-10-09T07:20:41.702773040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:20:41.702835 containerd[1594]: time="2024-10-09T07:20:41.702788371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:20:41.702835 containerd[1594]: time="2024-10-09T07:20:41.702798597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:20:41.761908 containerd[1594]: time="2024-10-09T07:20:41.761484112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69rb7,Uid:64fea273-5519-428c-8eb9-3148fc7b73aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"feabfb91df3cd69eaa844d52ac971a8692775097c31a7d45ef733b91b92bdfea\"" Oct 9 07:20:41.762769 kubelet[2700]: E1009 07:20:41.762703 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:41.767351 containerd[1594]: time="2024-10-09T07:20:41.767174531Z" level=info msg="CreateContainer within sandbox \"feabfb91df3cd69eaa844d52ac971a8692775097c31a7d45ef733b91b92bdfea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:20:41.788308 containerd[1594]: time="2024-10-09T07:20:41.788213293Z" level=info msg="CreateContainer within sandbox 
\"feabfb91df3cd69eaa844d52ac971a8692775097c31a7d45ef733b91b92bdfea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"25ba1a865cee98cacb2c856b054486f87a12826e6e93d018358307dedb8ea76b\"" Oct 9 07:20:41.791718 containerd[1594]: time="2024-10-09T07:20:41.790997510Z" level=info msg="StartContainer for \"25ba1a865cee98cacb2c856b054486f87a12826e6e93d018358307dedb8ea76b\"" Oct 9 07:20:41.868546 containerd[1594]: time="2024-10-09T07:20:41.867777352Z" level=info msg="StartContainer for \"25ba1a865cee98cacb2c856b054486f87a12826e6e93d018358307dedb8ea76b\" returns successfully" Oct 9 07:20:41.920271 containerd[1594]: time="2024-10-09T07:20:41.920068057Z" level=info msg="shim disconnected" id=25ba1a865cee98cacb2c856b054486f87a12826e6e93d018358307dedb8ea76b namespace=k8s.io Oct 9 07:20:41.920271 containerd[1594]: time="2024-10-09T07:20:41.920155742Z" level=warning msg="cleaning up after shim disconnected" id=25ba1a865cee98cacb2c856b054486f87a12826e6e93d018358307dedb8ea76b namespace=k8s.io Oct 9 07:20:41.920271 containerd[1594]: time="2024-10-09T07:20:41.920184469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:20:41.988329 kubelet[2700]: E1009 07:20:41.988258 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" Oct 9 07:20:42.308260 kubelet[2700]: E1009 07:20:42.308198 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:42.311396 containerd[1594]: time="2024-10-09T07:20:42.310979849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:20:42.991340 kubelet[2700]: I1009 07:20:42.991249 2700 
kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c" path="/var/lib/kubelet/pods/485a9c84-4ef0-42e1-98cf-0ede0a2e7a6c/volumes" Oct 9 07:20:43.989267 kubelet[2700]: E1009 07:20:43.988771 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" Oct 9 07:20:45.988335 kubelet[2700]: E1009 07:20:45.988296 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" Oct 9 07:20:46.904327 containerd[1594]: time="2024-10-09T07:20:46.903223944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:46.904327 containerd[1594]: time="2024-10-09T07:20:46.904231378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:20:46.905161 containerd[1594]: time="2024-10-09T07:20:46.905130681Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:46.908716 containerd[1594]: time="2024-10-09T07:20:46.908641570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:46.909688 containerd[1594]: time="2024-10-09T07:20:46.909652855Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.598613645s" Oct 9 07:20:46.909849 containerd[1594]: time="2024-10-09T07:20:46.909830406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:20:46.913524 containerd[1594]: time="2024-10-09T07:20:46.913487380Z" level=info msg="CreateContainer within sandbox \"feabfb91df3cd69eaa844d52ac971a8692775097c31a7d45ef733b91b92bdfea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:20:46.952401 containerd[1594]: time="2024-10-09T07:20:46.952277764Z" level=info msg="CreateContainer within sandbox \"feabfb91df3cd69eaa844d52ac971a8692775097c31a7d45ef733b91b92bdfea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b4c36008e9de58fd21cb1b1476d7fec853b803e7a2db224c71a87d34f37d6640\"" Oct 9 07:20:46.954766 containerd[1594]: time="2024-10-09T07:20:46.953036337Z" level=info msg="StartContainer for \"b4c36008e9de58fd21cb1b1476d7fec853b803e7a2db224c71a87d34f37d6640\"" Oct 9 07:20:47.127450 containerd[1594]: time="2024-10-09T07:20:47.127378639Z" level=info msg="StartContainer for \"b4c36008e9de58fd21cb1b1476d7fec853b803e7a2db224c71a87d34f37d6640\" returns successfully" Oct 9 07:20:47.334943 kubelet[2700]: E1009 07:20:47.334679 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:47.831433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4c36008e9de58fd21cb1b1476d7fec853b803e7a2db224c71a87d34f37d6640-rootfs.mount: Deactivated successfully. 
Oct 9 07:20:47.840050 kubelet[2700]: I1009 07:20:47.840002 2700 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:20:47.842737 containerd[1594]: time="2024-10-09T07:20:47.841783001Z" level=info msg="shim disconnected" id=b4c36008e9de58fd21cb1b1476d7fec853b803e7a2db224c71a87d34f37d6640 namespace=k8s.io Oct 9 07:20:47.842737 containerd[1594]: time="2024-10-09T07:20:47.841862358Z" level=warning msg="cleaning up after shim disconnected" id=b4c36008e9de58fd21cb1b1476d7fec853b803e7a2db224c71a87d34f37d6640 namespace=k8s.io Oct 9 07:20:47.842737 containerd[1594]: time="2024-10-09T07:20:47.841876484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:20:47.870192 containerd[1594]: time="2024-10-09T07:20:47.870130342Z" level=warning msg="cleanup warnings time=\"2024-10-09T07:20:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 07:20:47.890566 kubelet[2700]: I1009 07:20:47.885527 2700 topology_manager.go:215] "Topology Admit Handler" podUID="3336ca45-6ca5-4891-8a41-a80ade85063f" podNamespace="calico-system" podName="calico-kube-controllers-8f856f8-6czvj" Oct 9 07:20:47.892650 kubelet[2700]: I1009 07:20:47.891774 2700 topology_manager.go:215] "Topology Admit Handler" podUID="70a54fb9-5d74-4760-8a65-2ba9c139331e" podNamespace="kube-system" podName="coredns-76f75df574-wfcsr" Oct 9 07:20:47.894111 kubelet[2700]: I1009 07:20:47.893731 2700 topology_manager.go:215] "Topology Admit Handler" podUID="d0afbab9-e0c9-43bf-b03d-a80eefbc01be" podNamespace="kube-system" podName="coredns-76f75df574-stxm6" Oct 9 07:20:47.968276 kubelet[2700]: I1009 07:20:47.968153 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0afbab9-e0c9-43bf-b03d-a80eefbc01be-config-volume\") pod \"coredns-76f75df574-stxm6\" (UID: 
\"d0afbab9-e0c9-43bf-b03d-a80eefbc01be\") " pod="kube-system/coredns-76f75df574-stxm6" Oct 9 07:20:47.968276 kubelet[2700]: I1009 07:20:47.968206 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vw2b\" (UniqueName: \"kubernetes.io/projected/3336ca45-6ca5-4891-8a41-a80ade85063f-kube-api-access-5vw2b\") pod \"calico-kube-controllers-8f856f8-6czvj\" (UID: \"3336ca45-6ca5-4891-8a41-a80ade85063f\") " pod="calico-system/calico-kube-controllers-8f856f8-6czvj" Oct 9 07:20:47.968276 kubelet[2700]: I1009 07:20:47.968233 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70a54fb9-5d74-4760-8a65-2ba9c139331e-config-volume\") pod \"coredns-76f75df574-wfcsr\" (UID: \"70a54fb9-5d74-4760-8a65-2ba9c139331e\") " pod="kube-system/coredns-76f75df574-wfcsr" Oct 9 07:20:47.968276 kubelet[2700]: I1009 07:20:47.968295 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52ds8\" (UniqueName: \"kubernetes.io/projected/d0afbab9-e0c9-43bf-b03d-a80eefbc01be-kube-api-access-52ds8\") pod \"coredns-76f75df574-stxm6\" (UID: \"d0afbab9-e0c9-43bf-b03d-a80eefbc01be\") " pod="kube-system/coredns-76f75df574-stxm6" Oct 9 07:20:47.968581 kubelet[2700]: I1009 07:20:47.968334 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcpjq\" (UniqueName: \"kubernetes.io/projected/70a54fb9-5d74-4760-8a65-2ba9c139331e-kube-api-access-zcpjq\") pod \"coredns-76f75df574-wfcsr\" (UID: \"70a54fb9-5d74-4760-8a65-2ba9c139331e\") " pod="kube-system/coredns-76f75df574-wfcsr" Oct 9 07:20:47.968581 kubelet[2700]: I1009 07:20:47.968394 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3336ca45-6ca5-4891-8a41-a80ade85063f-tigera-ca-bundle\") pod \"calico-kube-controllers-8f856f8-6czvj\" (UID: \"3336ca45-6ca5-4891-8a41-a80ade85063f\") " pod="calico-system/calico-kube-controllers-8f856f8-6czvj" Oct 9 07:20:47.992744 containerd[1594]: time="2024-10-09T07:20:47.991963735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7tbg,Uid:e1f970c3-0e6b-4c36-a7b6-7c163a15816e,Namespace:calico-system,Attempt:0,}" Oct 9 07:20:48.201719 kubelet[2700]: E1009 07:20:48.201176 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:48.203720 containerd[1594]: time="2024-10-09T07:20:48.203659566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f856f8-6czvj,Uid:3336ca45-6ca5-4891-8a41-a80ade85063f,Namespace:calico-system,Attempt:0,}" Oct 9 07:20:48.205314 containerd[1594]: time="2024-10-09T07:20:48.205283440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wfcsr,Uid:70a54fb9-5d74-4760-8a65-2ba9c139331e,Namespace:kube-system,Attempt:0,}" Oct 9 07:20:48.206163 kubelet[2700]: E1009 07:20:48.205691 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:48.207011 containerd[1594]: time="2024-10-09T07:20:48.206798609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-stxm6,Uid:d0afbab9-e0c9-43bf-b03d-a80eefbc01be,Namespace:kube-system,Attempt:0,}" Oct 9 07:20:48.316610 containerd[1594]: time="2024-10-09T07:20:48.316463396Z" level=error msg="Failed to destroy network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.321409 containerd[1594]: time="2024-10-09T07:20:48.321297219Z" level=error msg="encountered an error cleaning up failed sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.344740 kubelet[2700]: I1009 07:20:48.344613 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:20:48.355963 kubelet[2700]: E1009 07:20:48.355748 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:48.373820 containerd[1594]: time="2024-10-09T07:20:48.373780274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:20:48.412041 containerd[1594]: time="2024-10-09T07:20:48.411855855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7tbg,Uid:e1f970c3-0e6b-4c36-a7b6-7c163a15816e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.412676 kubelet[2700]: E1009 07:20:48.412358 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.412676 kubelet[2700]: E1009 07:20:48.412433 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7tbg" Oct 9 07:20:48.412676 kubelet[2700]: E1009 07:20:48.412456 2700 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h7tbg" Oct 9 07:20:48.415235 kubelet[2700]: E1009 07:20:48.414930 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h7tbg_calico-system(e1f970c3-0e6b-4c36-a7b6-7c163a15816e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h7tbg_calico-system(e1f970c3-0e6b-4c36-a7b6-7c163a15816e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" Oct 9 07:20:48.429299 containerd[1594]: time="2024-10-09T07:20:48.429220545Z" 
level=error msg="Failed to destroy network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.429810 containerd[1594]: time="2024-10-09T07:20:48.429774993Z" level=error msg="encountered an error cleaning up failed sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.429878 containerd[1594]: time="2024-10-09T07:20:48.429857247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f856f8-6czvj,Uid:3336ca45-6ca5-4891-8a41-a80ade85063f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.430242 kubelet[2700]: E1009 07:20:48.430183 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.431598 kubelet[2700]: E1009 07:20:48.430629 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f856f8-6czvj" Oct 9 07:20:48.431598 kubelet[2700]: E1009 07:20:48.430684 2700 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8f856f8-6czvj" Oct 9 07:20:48.431598 kubelet[2700]: E1009 07:20:48.430792 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8f856f8-6czvj_calico-system(3336ca45-6ca5-4891-8a41-a80ade85063f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8f856f8-6czvj_calico-system(3336ca45-6ca5-4891-8a41-a80ade85063f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8f856f8-6czvj" podUID="3336ca45-6ca5-4891-8a41-a80ade85063f" Oct 9 07:20:48.452968 containerd[1594]: time="2024-10-09T07:20:48.452819677Z" level=error msg="Failed to destroy network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 9 07:20:48.454983 containerd[1594]: time="2024-10-09T07:20:48.454908896Z" level=error msg="encountered an error cleaning up failed sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.455163 containerd[1594]: time="2024-10-09T07:20:48.455044756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wfcsr,Uid:70a54fb9-5d74-4760-8a65-2ba9c139331e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.455788 kubelet[2700]: E1009 07:20:48.455372 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.455996 kubelet[2700]: E1009 07:20:48.455933 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wfcsr" Oct 9 07:20:48.456599 
kubelet[2700]: E1009 07:20:48.456094 2700 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wfcsr" Oct 9 07:20:48.457952 kubelet[2700]: E1009 07:20:48.457929 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-wfcsr_kube-system(70a54fb9-5d74-4760-8a65-2ba9c139331e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-wfcsr_kube-system(70a54fb9-5d74-4760-8a65-2ba9c139331e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wfcsr" podUID="70a54fb9-5d74-4760-8a65-2ba9c139331e" Oct 9 07:20:48.468863 containerd[1594]: time="2024-10-09T07:20:48.468734368Z" level=error msg="Failed to destroy network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.470052 containerd[1594]: time="2024-10-09T07:20:48.469610669Z" level=error msg="encountered an error cleaning up failed sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.470052 containerd[1594]: time="2024-10-09T07:20:48.469728891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-stxm6,Uid:d0afbab9-e0c9-43bf-b03d-a80eefbc01be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.470923 kubelet[2700]: E1009 07:20:48.470060 2700 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:48.470923 kubelet[2700]: E1009 07:20:48.470131 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-stxm6" Oct 9 07:20:48.470923 kubelet[2700]: E1009 07:20:48.470161 2700 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-stxm6" Oct 9 07:20:48.471060 kubelet[2700]: E1009 07:20:48.470243 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-stxm6_kube-system(d0afbab9-e0c9-43bf-b03d-a80eefbc01be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-stxm6_kube-system(d0afbab9-e0c9-43bf-b03d-a80eefbc01be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-stxm6" podUID="d0afbab9-e0c9-43bf-b03d-a80eefbc01be" Oct 9 07:20:49.011883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66-shm.mount: Deactivated successfully. 
Oct 9 07:20:49.361062 kubelet[2700]: I1009 07:20:49.359691 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:20:49.361814 containerd[1594]: time="2024-10-09T07:20:49.360349748Z" level=info msg="StopPodSandbox for \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\"" Oct 9 07:20:49.361814 containerd[1594]: time="2024-10-09T07:20:49.360770163Z" level=info msg="Ensure that sandbox df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13 in task-service has been cleanup successfully" Oct 9 07:20:49.364926 kubelet[2700]: I1009 07:20:49.364894 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:20:49.367896 containerd[1594]: time="2024-10-09T07:20:49.367464465Z" level=info msg="StopPodSandbox for \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\"" Oct 9 07:20:49.370573 kubelet[2700]: I1009 07:20:49.370534 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:20:49.371527 containerd[1594]: time="2024-10-09T07:20:49.371436860Z" level=info msg="Ensure that sandbox ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b in task-service has been cleanup successfully" Oct 9 07:20:49.372665 containerd[1594]: time="2024-10-09T07:20:49.372524526Z" level=info msg="StopPodSandbox for \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\"" Oct 9 07:20:49.372908 containerd[1594]: time="2024-10-09T07:20:49.372810399Z" level=info msg="Ensure that sandbox 3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00 in task-service has been cleanup successfully" Oct 9 07:20:49.374227 containerd[1594]: time="2024-10-09T07:20:49.374130799Z" level=info msg="StopPodSandbox for 
\"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\"" Oct 9 07:20:49.374838 containerd[1594]: time="2024-10-09T07:20:49.374485876Z" level=info msg="Ensure that sandbox b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66 in task-service has been cleanup successfully" Oct 9 07:20:49.493455 containerd[1594]: time="2024-10-09T07:20:49.493359520Z" level=error msg="StopPodSandbox for \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\" failed" error="failed to destroy network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:49.494040 kubelet[2700]: E1009 07:20:49.493735 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:20:49.494040 kubelet[2700]: E1009 07:20:49.493789 2700 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13"} Oct 9 07:20:49.494040 kubelet[2700]: E1009 07:20:49.493828 2700 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3336ca45-6ca5-4891-8a41-a80ade85063f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:20:49.494040 kubelet[2700]: E1009 07:20:49.493858 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3336ca45-6ca5-4891-8a41-a80ade85063f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8f856f8-6czvj" podUID="3336ca45-6ca5-4891-8a41-a80ade85063f" Oct 9 07:20:49.498006 containerd[1594]: time="2024-10-09T07:20:49.497752882Z" level=error msg="StopPodSandbox for \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\" failed" error="failed to destroy network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:49.498006 containerd[1594]: time="2024-10-09T07:20:49.497958204Z" level=error msg="StopPodSandbox for \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\" failed" error="failed to destroy network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:49.499322 kubelet[2700]: E1009 07:20:49.498194 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:20:49.499322 kubelet[2700]: E1009 07:20:49.498260 2700 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b"} Oct 9 07:20:49.499322 kubelet[2700]: E1009 07:20:49.498336 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:20:49.499322 kubelet[2700]: E1009 07:20:49.498413 2700 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0afbab9-e0c9-43bf-b03d-a80eefbc01be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:20:49.499322 kubelet[2700]: E1009 07:20:49.498419 2700 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00"} Oct 9 07:20:49.499605 kubelet[2700]: E1009 07:20:49.498460 2700 kuberuntime_manager.go:1081] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"70a54fb9-5d74-4760-8a65-2ba9c139331e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:20:49.499605 kubelet[2700]: E1009 07:20:49.498462 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0afbab9-e0c9-43bf-b03d-a80eefbc01be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-stxm6" podUID="d0afbab9-e0c9-43bf-b03d-a80eefbc01be" Oct 9 07:20:49.499605 kubelet[2700]: E1009 07:20:49.498487 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70a54fb9-5d74-4760-8a65-2ba9c139331e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wfcsr" podUID="70a54fb9-5d74-4760-8a65-2ba9c139331e" Oct 9 07:20:49.507832 containerd[1594]: time="2024-10-09T07:20:49.507780484Z" level=error msg="StopPodSandbox for \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\" failed" error="failed to destroy network for sandbox 
\"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:20:49.508432 kubelet[2700]: E1009 07:20:49.508370 2700 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:20:49.508529 kubelet[2700]: E1009 07:20:49.508445 2700 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66"} Oct 9 07:20:49.508529 kubelet[2700]: E1009 07:20:49.508508 2700 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1f970c3-0e6b-4c36-a7b6-7c163a15816e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:20:49.508769 kubelet[2700]: E1009 07:20:49.508585 2700 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1f970c3-0e6b-4c36-a7b6-7c163a15816e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h7tbg" podUID="e1f970c3-0e6b-4c36-a7b6-7c163a15816e" Oct 9 07:20:54.930316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566397520.mount: Deactivated successfully. Oct 9 07:20:54.983288 containerd[1594]: time="2024-10-09T07:20:54.981867618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:54.986059 containerd[1594]: time="2024-10-09T07:20:54.985999482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:20:54.987732 containerd[1594]: time="2024-10-09T07:20:54.987681392Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:54.995602 containerd[1594]: time="2024-10-09T07:20:54.995558213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:20:54.997023 containerd[1594]: time="2024-10-09T07:20:54.996979752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.62186767s" Oct 9 07:20:54.997023 containerd[1594]: time="2024-10-09T07:20:54.997022877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:20:55.057607 
containerd[1594]: time="2024-10-09T07:20:55.057521521Z" level=info msg="CreateContainer within sandbox \"feabfb91df3cd69eaa844d52ac971a8692775097c31a7d45ef733b91b92bdfea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:20:55.100135 containerd[1594]: time="2024-10-09T07:20:55.100046405Z" level=info msg="CreateContainer within sandbox \"feabfb91df3cd69eaa844d52ac971a8692775097c31a7d45ef733b91b92bdfea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"793582fc99eefa654778b389af2615a0bff10a542780522c8aca6e82dc1b35c5\"" Oct 9 07:20:55.101137 containerd[1594]: time="2024-10-09T07:20:55.101081741Z" level=info msg="StartContainer for \"793582fc99eefa654778b389af2615a0bff10a542780522c8aca6e82dc1b35c5\"" Oct 9 07:20:55.220417 containerd[1594]: time="2024-10-09T07:20:55.220257768Z" level=info msg="StartContainer for \"793582fc99eefa654778b389af2615a0bff10a542780522c8aca6e82dc1b35c5\" returns successfully" Oct 9 07:20:55.322119 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:20:55.323496 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 07:20:55.405989 kubelet[2700]: E1009 07:20:55.405955 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:20:55.460919 kubelet[2700]: I1009 07:20:55.460665 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-69rb7" podStartSLOduration=1.763502377 podStartE2EDuration="14.450529175s" podCreationTimestamp="2024-10-09 07:20:41 +0000 UTC" firstStartedPulling="2024-10-09 07:20:42.310576912 +0000 UTC m=+27.502282847" lastFinishedPulling="2024-10-09 07:20:54.997603711 +0000 UTC m=+40.189309645" observedRunningTime="2024-10-09 07:20:55.436172212 +0000 UTC m=+40.627878162" watchObservedRunningTime="2024-10-09 07:20:55.450529175 +0000 UTC m=+40.642235198" Oct 9 07:20:56.089643 systemd-journald[1137]: Under memory pressure, flushing caches. Oct 9 07:20:56.087759 systemd-resolved[1487]: Under memory pressure, flushing caches. Oct 9 07:20:56.087824 systemd-resolved[1487]: Flushed all caches. Oct 9 07:21:00.990202 containerd[1594]: time="2024-10-09T07:21:00.989210005Z" level=info msg="StopPodSandbox for \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\"" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.092 [INFO][4229] k8s.go 608: Cleaning up netns ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.093 [INFO][4229] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" iface="eth0" netns="/var/run/netns/cni-0d30da50-1010-8d2c-36fc-1f35cebb3ffa" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.093 [INFO][4229] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" iface="eth0" netns="/var/run/netns/cni-0d30da50-1010-8d2c-36fc-1f35cebb3ffa" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.094 [INFO][4229] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" iface="eth0" netns="/var/run/netns/cni-0d30da50-1010-8d2c-36fc-1f35cebb3ffa" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.094 [INFO][4229] k8s.go 615: Releasing IP address(es) ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.094 [INFO][4229] utils.go 188: Calico CNI releasing IP address ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.301 [INFO][4236] ipam_plugin.go 417: Releasing address using handleID ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.303 [INFO][4236] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.305 [INFO][4236] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.320 [WARNING][4236] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.321 [INFO][4236] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.323 [INFO][4236] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:01.328937 containerd[1594]: 2024-10-09 07:21:01.326 [INFO][4229] k8s.go 621: Teardown processing complete. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:01.331068 containerd[1594]: time="2024-10-09T07:21:01.330968321Z" level=info msg="TearDown network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\" successfully" Oct 9 07:21:01.331068 containerd[1594]: time="2024-10-09T07:21:01.331022455Z" level=info msg="StopPodSandbox for \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\" returns successfully" Oct 9 07:21:01.331627 kubelet[2700]: E1009 07:21:01.331588 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:01.335193 containerd[1594]: time="2024-10-09T07:21:01.332000175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wfcsr,Uid:70a54fb9-5d74-4760-8a65-2ba9c139331e,Namespace:kube-system,Attempt:1,}" Oct 9 07:21:01.337065 systemd[1]: run-netns-cni\x2d0d30da50\x2d1010\x2d8d2c\x2d36fc\x2d1f35cebb3ffa.mount: Deactivated successfully. 
Oct 9 07:21:01.689650 systemd-networkd[1226]: cali53071761dce: Link UP Oct 9 07:21:01.689907 systemd-networkd[1226]: cali53071761dce: Gained carrier Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.426 [INFO][4249] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.443 [INFO][4249] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0 coredns-76f75df574- kube-system 70a54fb9-5d74-4760-8a65-2ba9c139331e 832 0 2024-10-09 07:20:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.2-f-f6e42a54cc coredns-76f75df574-wfcsr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53071761dce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.443 [INFO][4249] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.550 [INFO][4273] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" HandleID="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 
07:21:01.570 [INFO][4273] ipam_plugin.go 270: Auto assigning IP ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" HandleID="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319430), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.2-f-f6e42a54cc", "pod":"coredns-76f75df574-wfcsr", "timestamp":"2024-10-09 07:21:01.550356235 +0000 UTC"}, Hostname:"ci-3975.2.2-f-f6e42a54cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.571 [INFO][4273] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.571 [INFO][4273] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.571 [INFO][4273] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-f-f6e42a54cc' Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.575 [INFO][4273] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.594 [INFO][4273] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.612 [INFO][4273] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.617 [INFO][4273] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.623 [INFO][4273] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.623 [INFO][4273] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.627 [INFO][4273] ipam.go 1685: Creating new handle: k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545 Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.643 [INFO][4273] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.655 [INFO][4273] ipam.go 1216: Successfully claimed IPs: [192.168.23.1/26] block=192.168.23.0/26 
handle="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.655 [INFO][4273] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.1/26] handle="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.655 [INFO][4273] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:01.707612 containerd[1594]: 2024-10-09 07:21:01.655 [INFO][4273] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.1/26] IPv6=[] ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" HandleID="k8s-pod-network.9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.709821 containerd[1594]: 2024-10-09 07:21:01.659 [INFO][4249] k8s.go 386: Populated endpoint ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"70a54fb9-5d74-4760-8a65-2ba9c139331e", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"", Pod:"coredns-76f75df574-wfcsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53071761dce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:01.709821 containerd[1594]: 2024-10-09 07:21:01.659 [INFO][4249] k8s.go 387: Calico CNI using IPs: [192.168.23.1/32] ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.709821 containerd[1594]: 2024-10-09 07:21:01.659 [INFO][4249] dataplane_linux.go 68: Setting the host side veth name to cali53071761dce ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.709821 containerd[1594]: 2024-10-09 07:21:01.672 [INFO][4249] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" 
WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.709821 containerd[1594]: 2024-10-09 07:21:01.673 [INFO][4249] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"70a54fb9-5d74-4760-8a65-2ba9c139331e", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545", Pod:"coredns-76f75df574-wfcsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53071761dce", MAC:"92:62:bb:86:07:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:01.709821 containerd[1594]: 2024-10-09 07:21:01.703 [INFO][4249] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545" Namespace="kube-system" Pod="coredns-76f75df574-wfcsr" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:01.757319 containerd[1594]: time="2024-10-09T07:21:01.756491831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:21:01.757319 containerd[1594]: time="2024-10-09T07:21:01.756601759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:01.757319 containerd[1594]: time="2024-10-09T07:21:01.756624356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:21:01.757319 containerd[1594]: time="2024-10-09T07:21:01.756669268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:01.838901 containerd[1594]: time="2024-10-09T07:21:01.838842136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wfcsr,Uid:70a54fb9-5d74-4760-8a65-2ba9c139331e,Namespace:kube-system,Attempt:1,} returns sandbox id \"9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545\"" Oct 9 07:21:01.840385 kubelet[2700]: E1009 07:21:01.840341 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:01.868209 containerd[1594]: time="2024-10-09T07:21:01.868121177Z" level=info msg="CreateContainer within sandbox \"9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:21:01.897848 containerd[1594]: time="2024-10-09T07:21:01.897690386Z" level=info msg="CreateContainer within sandbox \"9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b77ddb17cfb9bebefeeaab1a43b5d52c1902a2d1e25bc3b03123511a9262066\"" Oct 9 07:21:01.900298 containerd[1594]: time="2024-10-09T07:21:01.900061063Z" level=info msg="StartContainer for \"1b77ddb17cfb9bebefeeaab1a43b5d52c1902a2d1e25bc3b03123511a9262066\"" Oct 9 07:21:01.990304 containerd[1594]: time="2024-10-09T07:21:01.989910864Z" level=info msg="StartContainer for \"1b77ddb17cfb9bebefeeaab1a43b5d52c1902a2d1e25bc3b03123511a9262066\" returns successfully" Oct 9 07:21:02.371036 kubelet[2700]: I1009 07:21:02.370592 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:21:02.373136 kubelet[2700]: E1009 07:21:02.372202 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 
07:21:02.437018 kubelet[2700]: E1009 07:21:02.436971 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:02.437524 kubelet[2700]: E1009 07:21:02.437420 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:02.495266 kubelet[2700]: I1009 07:21:02.494049 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wfcsr" podStartSLOduration=34.493970464 podStartE2EDuration="34.493970464s" podCreationTimestamp="2024-10-09 07:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:21:02.48905313 +0000 UTC m=+47.680759108" watchObservedRunningTime="2024-10-09 07:21:02.493970464 +0000 UTC m=+47.685676421" Oct 9 07:21:02.935747 systemd-networkd[1226]: cali53071761dce: Gained IPv6LL Oct 9 07:21:02.991703 containerd[1594]: time="2024-10-09T07:21:02.991325406Z" level=info msg="StopPodSandbox for \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\"" Oct 9 07:21:02.991703 containerd[1594]: time="2024-10-09T07:21:02.991578912Z" level=info msg="StopPodSandbox for \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\"" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.082 [INFO][4437] k8s.go 608: Cleaning up netns ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.082 [INFO][4437] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" iface="eth0" netns="/var/run/netns/cni-52e1b10d-cc24-90ac-d18d-e1f1863aee2e" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.083 [INFO][4437] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" iface="eth0" netns="/var/run/netns/cni-52e1b10d-cc24-90ac-d18d-e1f1863aee2e" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.083 [INFO][4437] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" iface="eth0" netns="/var/run/netns/cni-52e1b10d-cc24-90ac-d18d-e1f1863aee2e" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.083 [INFO][4437] k8s.go 615: Releasing IP address(es) ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.084 [INFO][4437] utils.go 188: Calico CNI releasing IP address ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.111 [INFO][4456] ipam_plugin.go 417: Releasing address using handleID ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.111 [INFO][4456] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.111 [INFO][4456] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.118 [WARNING][4456] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.118 [INFO][4456] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.122 [INFO][4456] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:03.127521 containerd[1594]: 2024-10-09 07:21:03.124 [INFO][4437] k8s.go 621: Teardown processing complete. ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:03.127521 containerd[1594]: time="2024-10-09T07:21:03.127187187Z" level=info msg="TearDown network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\" successfully" Oct 9 07:21:03.127521 containerd[1594]: time="2024-10-09T07:21:03.127230508Z" level=info msg="StopPodSandbox for \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\" returns successfully" Oct 9 07:21:03.130745 kubelet[2700]: E1009 07:21:03.128585 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:03.132944 containerd[1594]: time="2024-10-09T07:21:03.130407935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-stxm6,Uid:d0afbab9-e0c9-43bf-b03d-a80eefbc01be,Namespace:kube-system,Attempt:1,}" Oct 9 07:21:03.134765 systemd[1]: run-netns-cni\x2d52e1b10d\x2dcc24\x2d90ac\x2dd18d\x2de1f1863aee2e.mount: Deactivated successfully. 
Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.070 [INFO][4441] k8s.go 608: Cleaning up netns ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.071 [INFO][4441] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" iface="eth0" netns="/var/run/netns/cni-2e37913c-8e24-8a27-f297-bc616bf1577c" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.071 [INFO][4441] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" iface="eth0" netns="/var/run/netns/cni-2e37913c-8e24-8a27-f297-bc616bf1577c" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.072 [INFO][4441] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" iface="eth0" netns="/var/run/netns/cni-2e37913c-8e24-8a27-f297-bc616bf1577c" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.072 [INFO][4441] k8s.go 615: Releasing IP address(es) ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.072 [INFO][4441] utils.go 188: Calico CNI releasing IP address ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.126 [INFO][4452] ipam_plugin.go 417: Releasing address using handleID ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.130 [INFO][4452] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.130 [INFO][4452] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.143 [WARNING][4452] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.144 [INFO][4452] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.146 [INFO][4452] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:03.164384 containerd[1594]: 2024-10-09 07:21:03.156 [INFO][4441] k8s.go 621: Teardown processing complete. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:03.165584 containerd[1594]: time="2024-10-09T07:21:03.165433057Z" level=info msg="TearDown network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\" successfully" Oct 9 07:21:03.165584 containerd[1594]: time="2024-10-09T07:21:03.165470745Z" level=info msg="StopPodSandbox for \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\" returns successfully" Oct 9 07:21:03.169270 systemd[1]: run-netns-cni\x2d2e37913c\x2d8e24\x2d8a27\x2df297\x2dbc616bf1577c.mount: Deactivated successfully. 
Oct 9 07:21:03.174801 containerd[1594]: time="2024-10-09T07:21:03.174748679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7tbg,Uid:e1f970c3-0e6b-4c36-a7b6-7c163a15816e,Namespace:calico-system,Attempt:1,}" Oct 9 07:21:03.368236 systemd-networkd[1226]: calia62c3a53bfd: Link UP Oct 9 07:21:03.369284 systemd-networkd[1226]: calia62c3a53bfd: Gained carrier Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.202 [INFO][4468] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.227 [INFO][4468] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0 coredns-76f75df574- kube-system d0afbab9-e0c9-43bf-b03d-a80eefbc01be 866 0 2024-10-09 07:20:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.2-f-f6e42a54cc coredns-76f75df574-stxm6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia62c3a53bfd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.227 [INFO][4468] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.276 [INFO][4487] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" HandleID="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.289 [INFO][4487] ipam_plugin.go 270: Auto assigning IP ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" HandleID="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.2-f-f6e42a54cc", "pod":"coredns-76f75df574-stxm6", "timestamp":"2024-10-09 07:21:03.276481235 +0000 UTC"}, Hostname:"ci-3975.2.2-f-f6e42a54cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.289 [INFO][4487] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.289 [INFO][4487] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.289 [INFO][4487] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-f-f6e42a54cc' Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.292 [INFO][4487] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.300 [INFO][4487] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.310 [INFO][4487] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.313 [INFO][4487] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.320 [INFO][4487] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.320 [INFO][4487] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.327 [INFO][4487] ipam.go 1685: Creating new handle: k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.338 [INFO][4487] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.353 [INFO][4487] ipam.go 1216: Successfully claimed IPs: [192.168.23.2/26] block=192.168.23.0/26 
handle="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.354 [INFO][4487] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.2/26] handle="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.355 [INFO][4487] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:03.395207 containerd[1594]: 2024-10-09 07:21:03.355 [INFO][4487] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.2/26] IPv6=[] ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" HandleID="k8s-pod-network.4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.396215 containerd[1594]: 2024-10-09 07:21:03.359 [INFO][4468] k8s.go 386: Populated endpoint ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d0afbab9-e0c9-43bf-b03d-a80eefbc01be", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"", Pod:"coredns-76f75df574-stxm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia62c3a53bfd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:03.396215 containerd[1594]: 2024-10-09 07:21:03.359 [INFO][4468] k8s.go 387: Calico CNI using IPs: [192.168.23.2/32] ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.396215 containerd[1594]: 2024-10-09 07:21:03.359 [INFO][4468] dataplane_linux.go 68: Setting the host side veth name to calia62c3a53bfd ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.396215 containerd[1594]: 2024-10-09 07:21:03.369 [INFO][4468] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" 
WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.396215 containerd[1594]: 2024-10-09 07:21:03.371 [INFO][4468] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d0afbab9-e0c9-43bf-b03d-a80eefbc01be", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe", Pod:"coredns-76f75df574-stxm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia62c3a53bfd", MAC:"ae:12:1a:80:90:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:03.396215 containerd[1594]: 2024-10-09 07:21:03.390 [INFO][4468] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe" Namespace="kube-system" Pod="coredns-76f75df574-stxm6" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:03.452968 systemd-networkd[1226]: califd7b9fcca1e: Link UP Oct 9 07:21:03.455559 systemd-networkd[1226]: califd7b9fcca1e: Gained carrier Oct 9 07:21:03.479102 kubelet[2700]: E1009 07:21:03.477828 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:03.491632 containerd[1594]: time="2024-10-09T07:21:03.490877271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:21:03.491632 containerd[1594]: time="2024-10-09T07:21:03.491030249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:03.491632 containerd[1594]: time="2024-10-09T07:21:03.491069163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:21:03.491632 containerd[1594]: time="2024-10-09T07:21:03.491091949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.240 [INFO][4473] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.258 [INFO][4473] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0 csi-node-driver- calico-system e1f970c3-0e6b-4c36-a7b6-7c163a15816e 865 0 2024-10-09 07:20:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.2-f-f6e42a54cc csi-node-driver-h7tbg eth0 default [] [] [kns.calico-system ksa.calico-system.default] califd7b9fcca1e [] []}} ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.258 [INFO][4473] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.315 [INFO][4493] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" HandleID="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.333 [INFO][4493] ipam_plugin.go 270: Auto assigning IP 
ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" HandleID="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000599c90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.2-f-f6e42a54cc", "pod":"csi-node-driver-h7tbg", "timestamp":"2024-10-09 07:21:03.315487009 +0000 UTC"}, Hostname:"ci-3975.2.2-f-f6e42a54cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.333 [INFO][4493] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.354 [INFO][4493] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.354 [INFO][4493] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-f-f6e42a54cc' Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.358 [INFO][4493] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.370 [INFO][4493] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.390 [INFO][4493] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.396 [INFO][4493] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.401 [INFO][4493] ipam.go 232: Affinity is 
confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.401 [INFO][4493] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.406 [INFO][4493] ipam.go 1685: Creating new handle: k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7 Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.416 [INFO][4493] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.430 [INFO][4493] ipam.go 1216: Successfully claimed IPs: [192.168.23.3/26] block=192.168.23.0/26 handle="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.430 [INFO][4493] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.3/26] handle="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.430 [INFO][4493] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:21:03.492722 containerd[1594]: 2024-10-09 07:21:03.430 [INFO][4493] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.3/26] IPv6=[] ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" HandleID="k8s-pod-network.589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.496099 containerd[1594]: 2024-10-09 07:21:03.434 [INFO][4473] k8s.go 386: Populated endpoint ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1f970c3-0e6b-4c36-a7b6-7c163a15816e", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"", Pod:"csi-node-driver-h7tbg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califd7b9fcca1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:03.496099 containerd[1594]: 2024-10-09 07:21:03.438 [INFO][4473] k8s.go 387: Calico CNI using IPs: [192.168.23.3/32] ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.496099 containerd[1594]: 2024-10-09 07:21:03.438 [INFO][4473] dataplane_linux.go 68: Setting the host side veth name to califd7b9fcca1e ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.496099 containerd[1594]: 2024-10-09 07:21:03.455 [INFO][4473] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.496099 containerd[1594]: 2024-10-09 07:21:03.461 [INFO][4473] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1f970c3-0e6b-4c36-a7b6-7c163a15816e", ResourceVersion:"865", Generation:0, 
CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7", Pod:"csi-node-driver-h7tbg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califd7b9fcca1e", MAC:"6e:af:29:36:69:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:03.496099 containerd[1594]: 2024-10-09 07:21:03.484 [INFO][4473] k8s.go 500: Wrote updated endpoint to datastore ContainerID="589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7" Namespace="calico-system" Pod="csi-node-driver-h7tbg" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:03.561823 systemd[1]: run-containerd-runc-k8s.io-4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe-runc.sgNqyV.mount: Deactivated successfully. Oct 9 07:21:03.602408 containerd[1594]: time="2024-10-09T07:21:03.593005201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:21:03.602408 containerd[1594]: time="2024-10-09T07:21:03.593267476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:03.602408 containerd[1594]: time="2024-10-09T07:21:03.593285226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:21:03.602408 containerd[1594]: time="2024-10-09T07:21:03.593295075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:03.692365 containerd[1594]: time="2024-10-09T07:21:03.692161929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h7tbg,Uid:e1f970c3-0e6b-4c36-a7b6-7c163a15816e,Namespace:calico-system,Attempt:1,} returns sandbox id \"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7\"" Oct 9 07:21:03.698187 containerd[1594]: time="2024-10-09T07:21:03.697941642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-stxm6,Uid:d0afbab9-e0c9-43bf-b03d-a80eefbc01be,Namespace:kube-system,Attempt:1,} returns sandbox id \"4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe\"" Oct 9 07:21:03.703936 kubelet[2700]: E1009 07:21:03.702678 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:03.710283 containerd[1594]: time="2024-10-09T07:21:03.709818375Z" level=info msg="CreateContainer within sandbox \"4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:21:03.714165 containerd[1594]: time="2024-10-09T07:21:03.713951364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:21:03.733985 
containerd[1594]: time="2024-10-09T07:21:03.733937516Z" level=info msg="CreateContainer within sandbox \"4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c06e93053b55f70a5583e6302fe8a32acca00fe2041c550907786c5ec8a48e1\"" Oct 9 07:21:03.737039 containerd[1594]: time="2024-10-09T07:21:03.736984745Z" level=info msg="StartContainer for \"2c06e93053b55f70a5583e6302fe8a32acca00fe2041c550907786c5ec8a48e1\"" Oct 9 07:21:03.742682 kernel: bpftool[4620]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:21:03.828548 containerd[1594]: time="2024-10-09T07:21:03.828482213Z" level=info msg="StartContainer for \"2c06e93053b55f70a5583e6302fe8a32acca00fe2041c550907786c5ec8a48e1\" returns successfully" Oct 9 07:21:03.989901 containerd[1594]: time="2024-10-09T07:21:03.989775634Z" level=info msg="StopPodSandbox for \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\"" Oct 9 07:21:04.175291 systemd-networkd[1226]: vxlan.calico: Link UP Oct 9 07:21:04.175302 systemd-networkd[1226]: vxlan.calico: Gained carrier Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.063 [INFO][4676] k8s.go 608: Cleaning up netns ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.064 [INFO][4676] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" iface="eth0" netns="/var/run/netns/cni-9f917869-47b7-8545-cb68-da1fc6876600" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.067 [INFO][4676] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" iface="eth0" netns="/var/run/netns/cni-9f917869-47b7-8545-cb68-da1fc6876600" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.069 [INFO][4676] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" iface="eth0" netns="/var/run/netns/cni-9f917869-47b7-8545-cb68-da1fc6876600" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.069 [INFO][4676] k8s.go 615: Releasing IP address(es) ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.069 [INFO][4676] utils.go 188: Calico CNI releasing IP address ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.141 [INFO][4695] ipam_plugin.go 417: Releasing address using handleID ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.143 [INFO][4695] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.143 [INFO][4695] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.161 [WARNING][4695] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.161 [INFO][4695] ipam_plugin.go 445: Releasing address using workloadID ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.179 [INFO][4695] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:04.264343 containerd[1594]: 2024-10-09 07:21:04.202 [INFO][4676] k8s.go 621: Teardown processing complete. ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:04.264343 containerd[1594]: time="2024-10-09T07:21:04.263976149Z" level=info msg="TearDown network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\" successfully" Oct 9 07:21:04.264343 containerd[1594]: time="2024-10-09T07:21:04.264024882Z" level=info msg="StopPodSandbox for \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\" returns successfully" Oct 9 07:21:04.269682 containerd[1594]: time="2024-10-09T07:21:04.269232473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f856f8-6czvj,Uid:3336ca45-6ca5-4891-8a41-a80ade85063f,Namespace:calico-system,Attempt:1,}" Oct 9 07:21:04.342667 systemd[1]: run-netns-cni\x2d9f917869\x2d47b7\x2d8545\x2dcb68\x2dda1fc6876600.mount: Deactivated successfully. 
Oct 9 07:21:04.453375 kubelet[2700]: E1009 07:21:04.453346 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:04.457529 kubelet[2700]: E1009 07:21:04.457500 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:04.480030 systemd-networkd[1226]: cali3cd4bd103f1: Link UP Oct 9 07:21:04.484269 systemd-networkd[1226]: cali3cd4bd103f1: Gained carrier Oct 9 07:21:04.506460 kubelet[2700]: I1009 07:21:04.503814 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-stxm6" podStartSLOduration=36.503753315 podStartE2EDuration="36.503753315s" podCreationTimestamp="2024-10-09 07:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:21:04.47477449 +0000 UTC m=+49.666480445" watchObservedRunningTime="2024-10-09 07:21:04.503753315 +0000 UTC m=+49.695459272" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.349 [INFO][4724] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0 calico-kube-controllers-8f856f8- calico-system 3336ca45-6ca5-4891-8a41-a80ade85063f 883 0 2024-10-09 07:20:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8f856f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.2-f-f6e42a54cc calico-kube-controllers-8f856f8-6czvj eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali3cd4bd103f1 [] []}} ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.349 [INFO][4724] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.402 [INFO][4736] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" HandleID="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.414 [INFO][4736] ipam_plugin.go 270: Auto assigning IP ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" HandleID="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edd20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.2-f-f6e42a54cc", "pod":"calico-kube-controllers-8f856f8-6czvj", "timestamp":"2024-10-09 07:21:04.402180165 +0000 UTC"}, Hostname:"ci-3975.2.2-f-f6e42a54cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:21:04.520855 
containerd[1594]: 2024-10-09 07:21:04.414 [INFO][4736] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.414 [INFO][4736] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.414 [INFO][4736] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-f-f6e42a54cc' Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.417 [INFO][4736] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.425 [INFO][4736] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.431 [INFO][4736] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.434 [INFO][4736] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.437 [INFO][4736] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.437 [INFO][4736] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.441 [INFO][4736] ipam.go 1685: Creating new handle: k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4 Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.448 [INFO][4736] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 
handle="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.461 [INFO][4736] ipam.go 1216: Successfully claimed IPs: [192.168.23.4/26] block=192.168.23.0/26 handle="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.461 [INFO][4736] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.4/26] handle="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.461 [INFO][4736] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:04.520855 containerd[1594]: 2024-10-09 07:21:04.461 [INFO][4736] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.4/26] IPv6=[] ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" HandleID="k8s-pod-network.1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.525684 containerd[1594]: 2024-10-09 07:21:04.467 [INFO][4724] k8s.go 386: Populated endpoint ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0", GenerateName:"calico-kube-controllers-8f856f8-", Namespace:"calico-system", SelfLink:"", UID:"3336ca45-6ca5-4891-8a41-a80ade85063f", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, 
time.October, 9, 7, 20, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f856f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"", Pod:"calico-kube-controllers-8f856f8-6czvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cd4bd103f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:04.525684 containerd[1594]: 2024-10-09 07:21:04.467 [INFO][4724] k8s.go 387: Calico CNI using IPs: [192.168.23.4/32] ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.525684 containerd[1594]: 2024-10-09 07:21:04.468 [INFO][4724] dataplane_linux.go 68: Setting the host side veth name to cali3cd4bd103f1 ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.525684 containerd[1594]: 2024-10-09 07:21:04.486 [INFO][4724] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.525684 containerd[1594]: 2024-10-09 07:21:04.487 [INFO][4724] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0", GenerateName:"calico-kube-controllers-8f856f8-", Namespace:"calico-system", SelfLink:"", UID:"3336ca45-6ca5-4891-8a41-a80ade85063f", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f856f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4", Pod:"calico-kube-controllers-8f856f8-6czvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cd4bd103f1", MAC:"82:36:d2:1e:13:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:04.525684 containerd[1594]: 2024-10-09 07:21:04.508 [INFO][4724] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4" Namespace="calico-system" Pod="calico-kube-controllers-8f856f8-6czvj" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:04.577360 containerd[1594]: time="2024-10-09T07:21:04.577149276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:21:04.577964 containerd[1594]: time="2024-10-09T07:21:04.577436741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:04.577964 containerd[1594]: time="2024-10-09T07:21:04.577631021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:21:04.577964 containerd[1594]: time="2024-10-09T07:21:04.577661239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:04.720843 containerd[1594]: time="2024-10-09T07:21:04.720564238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8f856f8-6czvj,Uid:3336ca45-6ca5-4891-8a41-a80ade85063f,Namespace:calico-system,Attempt:1,} returns sandbox id \"1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4\"" Oct 9 07:21:04.727871 systemd-networkd[1226]: calia62c3a53bfd: Gained IPv6LL Oct 9 07:21:04.919925 systemd-networkd[1226]: califd7b9fcca1e: Gained IPv6LL Oct 9 07:21:05.220302 containerd[1594]: time="2024-10-09T07:21:05.220156759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:05.221313 containerd[1594]: time="2024-10-09T07:21:05.221262905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:21:05.222294 containerd[1594]: time="2024-10-09T07:21:05.222212819Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:05.224943 containerd[1594]: time="2024-10-09T07:21:05.224601973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:05.225711 containerd[1594]: time="2024-10-09T07:21:05.225670002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.511664443s" Oct 9 07:21:05.225792 containerd[1594]: 
time="2024-10-09T07:21:05.225721391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:21:05.228456 containerd[1594]: time="2024-10-09T07:21:05.228413245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:21:05.230091 containerd[1594]: time="2024-10-09T07:21:05.229862829Z" level=info msg="CreateContainer within sandbox \"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:21:05.258686 containerd[1594]: time="2024-10-09T07:21:05.258637903Z" level=info msg="CreateContainer within sandbox \"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"31824bb6bbc84486b4766d40225aa0bcd0912b84ab6881b0852132061ac4d821\"" Oct 9 07:21:05.259406 containerd[1594]: time="2024-10-09T07:21:05.259342899Z" level=info msg="StartContainer for \"31824bb6bbc84486b4766d40225aa0bcd0912b84ab6881b0852132061ac4d821\"" Oct 9 07:21:05.400307 containerd[1594]: time="2024-10-09T07:21:05.399763976Z" level=info msg="StartContainer for \"31824bb6bbc84486b4766d40225aa0bcd0912b84ab6881b0852132061ac4d821\" returns successfully" Oct 9 07:21:05.411092 kubelet[2700]: I1009 07:21:05.409782 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:21:05.413111 kubelet[2700]: E1009 07:21:05.413069 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:05.497892 kubelet[2700]: E1009 07:21:05.496119 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:05.752108 
systemd-networkd[1226]: vxlan.calico: Gained IPv6LL Oct 9 07:21:05.774905 kubelet[2700]: E1009 07:21:05.774869 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:06.136604 systemd-journald[1137]: Under memory pressure, flushing caches. Oct 9 07:21:06.136386 systemd-resolved[1487]: Under memory pressure, flushing caches. Oct 9 07:21:06.136433 systemd-resolved[1487]: Flushed all caches. Oct 9 07:21:06.456203 systemd-networkd[1226]: cali3cd4bd103f1: Gained IPv6LL Oct 9 07:21:07.809880 systemd[1]: Started sshd@7-161.35.237.80:22-147.75.109.163:54202.service - OpenSSH per-connection server daemon (147.75.109.163:54202). Oct 9 07:21:07.966023 sshd[4921]: Accepted publickey for core from 147.75.109.163 port 54202 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:07.970617 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:07.997292 systemd-logind[1573]: New session 8 of user core. Oct 9 07:21:08.003346 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 9 07:21:08.274913 containerd[1594]: time="2024-10-09T07:21:08.274757552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:08.285898 containerd[1594]: time="2024-10-09T07:21:08.285370016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:21:08.288686 containerd[1594]: time="2024-10-09T07:21:08.288617087Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:08.302893 containerd[1594]: time="2024-10-09T07:21:08.301529646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.07306941s" Oct 9 07:21:08.302893 containerd[1594]: time="2024-10-09T07:21:08.301592516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:21:08.303642 containerd[1594]: time="2024-10-09T07:21:08.301652958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:08.308851 containerd[1594]: time="2024-10-09T07:21:08.308720605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:21:08.339564 containerd[1594]: time="2024-10-09T07:21:08.339249214Z" level=info msg="CreateContainer within sandbox 
\"1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:21:08.375640 containerd[1594]: time="2024-10-09T07:21:08.375567500Z" level=info msg="CreateContainer within sandbox \"1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9d605468cf8ba86e193da0d68bd9cdb9daff8f6195768d005d3ff6249cf5522f\"" Oct 9 07:21:08.376767 containerd[1594]: time="2024-10-09T07:21:08.376441063Z" level=info msg="StartContainer for \"9d605468cf8ba86e193da0d68bd9cdb9daff8f6195768d005d3ff6249cf5522f\"" Oct 9 07:21:08.528517 sshd[4921]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:08.539050 systemd[1]: sshd@7-161.35.237.80:22-147.75.109.163:54202.service: Deactivated successfully. Oct 9 07:21:08.547774 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:21:08.548045 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:21:08.558601 systemd-logind[1573]: Removed session 8. 
Oct 9 07:21:08.657156 containerd[1594]: time="2024-10-09T07:21:08.657102447Z" level=info msg="StartContainer for \"9d605468cf8ba86e193da0d68bd9cdb9daff8f6195768d005d3ff6249cf5522f\" returns successfully" Oct 9 07:21:09.586375 kubelet[2700]: I1009 07:21:09.585084 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8f856f8-6czvj" podStartSLOduration=31.007702606 podStartE2EDuration="34.585039141s" podCreationTimestamp="2024-10-09 07:20:35 +0000 UTC" firstStartedPulling="2024-10-09 07:21:04.724680779 +0000 UTC m=+49.916386730" lastFinishedPulling="2024-10-09 07:21:08.302017322 +0000 UTC m=+53.493723265" observedRunningTime="2024-10-09 07:21:09.58470396 +0000 UTC m=+54.776409910" watchObservedRunningTime="2024-10-09 07:21:09.585039141 +0000 UTC m=+54.776745096" Oct 9 07:21:10.092397 containerd[1594]: time="2024-10-09T07:21:10.092343376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:10.093191 containerd[1594]: time="2024-10-09T07:21:10.092872820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:21:10.096576 containerd[1594]: time="2024-10-09T07:21:10.096355071Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:10.102765 containerd[1594]: time="2024-10-09T07:21:10.101210303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:10.112108 containerd[1594]: time="2024-10-09T07:21:10.111977877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image 
id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.803119613s" Oct 9 07:21:10.112108 containerd[1594]: time="2024-10-09T07:21:10.112049418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:21:10.141587 containerd[1594]: time="2024-10-09T07:21:10.140976265Z" level=info msg="CreateContainer within sandbox \"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:21:10.166065 containerd[1594]: time="2024-10-09T07:21:10.166003492Z" level=info msg="CreateContainer within sandbox \"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cd62624ea7c6cc0f19a40be7aae7577706a20070d08a9405933549cacc2e6a79\"" Oct 9 07:21:10.166770 containerd[1594]: time="2024-10-09T07:21:10.166733002Z" level=info msg="StartContainer for \"cd62624ea7c6cc0f19a40be7aae7577706a20070d08a9405933549cacc2e6a79\"" Oct 9 07:21:10.172347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947606522.mount: Deactivated successfully. 
Oct 9 07:21:10.277718 containerd[1594]: time="2024-10-09T07:21:10.277636377Z" level=info msg="StartContainer for \"cd62624ea7c6cc0f19a40be7aae7577706a20070d08a9405933549cacc2e6a79\" returns successfully" Oct 9 07:21:11.254090 kubelet[2700]: I1009 07:21:11.254031 2700 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:21:11.259299 kubelet[2700]: I1009 07:21:11.259240 2700 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:21:13.548246 systemd[1]: Started sshd@8-161.35.237.80:22-147.75.109.163:54218.service - OpenSSH per-connection server daemon (147.75.109.163:54218). Oct 9 07:21:13.729600 sshd[5049]: Accepted publickey for core from 147.75.109.163 port 54218 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:13.733741 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:13.744822 systemd-logind[1573]: New session 9 of user core. Oct 9 07:21:13.750963 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:21:14.118866 sshd[5049]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:14.124703 systemd[1]: sshd@8-161.35.237.80:22-147.75.109.163:54218.service: Deactivated successfully. Oct 9 07:21:14.129488 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:21:14.130064 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:21:14.132072 systemd-logind[1573]: Removed session 9. 
Oct 9 07:21:14.999531 containerd[1594]: time="2024-10-09T07:21:14.999485144Z" level=info msg="StopPodSandbox for \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\"" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.070 [WARNING][5090] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"70a54fb9-5d74-4760-8a65-2ba9c139331e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545", Pod:"coredns-76f75df574-wfcsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53071761dce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.070 [INFO][5090] k8s.go 608: Cleaning up netns ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.070 [INFO][5090] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" iface="eth0" netns="" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.070 [INFO][5090] k8s.go 615: Releasing IP address(es) ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.070 [INFO][5090] utils.go 188: Calico CNI releasing IP address ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.099 [INFO][5098] ipam_plugin.go 417: Releasing address using handleID ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.099 [INFO][5098] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.099 [INFO][5098] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.108 [WARNING][5098] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.109 [INFO][5098] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.114 [INFO][5098] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:15.126116 containerd[1594]: 2024-10-09 07:21:15.120 [INFO][5090] k8s.go 621: Teardown processing complete. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.129730 containerd[1594]: time="2024-10-09T07:21:15.126124814Z" level=info msg="TearDown network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\" successfully" Oct 9 07:21:15.129730 containerd[1594]: time="2024-10-09T07:21:15.126154882Z" level=info msg="StopPodSandbox for \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\" returns successfully" Oct 9 07:21:15.135279 containerd[1594]: time="2024-10-09T07:21:15.135090742Z" level=info msg="RemovePodSandbox for \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\"" Oct 9 07:21:15.139380 containerd[1594]: time="2024-10-09T07:21:15.139081105Z" level=info msg="Forcibly stopping sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\"" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.218 [WARNING][5116] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"70a54fb9-5d74-4760-8a65-2ba9c139331e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"9bc387d21825db9a745d1f91275afa8c39ce3b0ce063a5ed016c54a165b9f545", Pod:"coredns-76f75df574-wfcsr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53071761dce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.218 [INFO][5116] k8s.go 608: 
Cleaning up netns ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.218 [INFO][5116] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" iface="eth0" netns="" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.218 [INFO][5116] k8s.go 615: Releasing IP address(es) ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.218 [INFO][5116] utils.go 188: Calico CNI releasing IP address ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.251 [INFO][5122] ipam_plugin.go 417: Releasing address using handleID ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.251 [INFO][5122] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.251 [INFO][5122] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.259 [WARNING][5122] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.259 [INFO][5122] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" HandleID="k8s-pod-network.3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--wfcsr-eth0" Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.262 [INFO][5122] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:15.269031 containerd[1594]: 2024-10-09 07:21:15.265 [INFO][5116] k8s.go 621: Teardown processing complete. ContainerID="3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00" Oct 9 07:21:15.269031 containerd[1594]: time="2024-10-09T07:21:15.268815788Z" level=info msg="TearDown network for sandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\" successfully" Oct 9 07:21:15.316290 containerd[1594]: time="2024-10-09T07:21:15.316194114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:21:15.316476 containerd[1594]: time="2024-10-09T07:21:15.316343784Z" level=info msg="RemovePodSandbox \"3f83c3a32eba9d2d238a139f196d8a01e7cfd89d1fadae4f5fa8f670b3884c00\" returns successfully" Oct 9 07:21:15.317267 containerd[1594]: time="2024-10-09T07:21:15.317218175Z" level=info msg="StopPodSandbox for \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\"" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.372 [WARNING][5140] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0", GenerateName:"calico-kube-controllers-8f856f8-", Namespace:"calico-system", SelfLink:"", UID:"3336ca45-6ca5-4891-8a41-a80ade85063f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f856f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4", Pod:"calico-kube-controllers-8f856f8-6czvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cd4bd103f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.373 [INFO][5140] k8s.go 608: Cleaning up netns ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.373 [INFO][5140] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" iface="eth0" netns="" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.373 [INFO][5140] k8s.go 615: Releasing IP address(es) ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.373 [INFO][5140] utils.go 188: Calico CNI releasing IP address ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.402 [INFO][5146] ipam_plugin.go 417: Releasing address using handleID ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.402 [INFO][5146] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.402 [INFO][5146] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.412 [WARNING][5146] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.412 [INFO][5146] ipam_plugin.go 445: Releasing address using workloadID ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.415 [INFO][5146] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:15.420655 containerd[1594]: 2024-10-09 07:21:15.417 [INFO][5140] k8s.go 621: Teardown processing complete. ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.420655 containerd[1594]: time="2024-10-09T07:21:15.420434530Z" level=info msg="TearDown network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\" successfully" Oct 9 07:21:15.420655 containerd[1594]: time="2024-10-09T07:21:15.420469343Z" level=info msg="StopPodSandbox for \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\" returns successfully" Oct 9 07:21:15.421912 containerd[1594]: time="2024-10-09T07:21:15.421512426Z" level=info msg="RemovePodSandbox for \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\"" Oct 9 07:21:15.421912 containerd[1594]: time="2024-10-09T07:21:15.421577216Z" level=info msg="Forcibly stopping sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\"" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.481 [WARNING][5164] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0", GenerateName:"calico-kube-controllers-8f856f8-", Namespace:"calico-system", SelfLink:"", UID:"3336ca45-6ca5-4891-8a41-a80ade85063f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8f856f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"1400f2f788eafb6df72d34020faca3a18cd2b69d2f2b118eccb035dae25d3aa4", Pod:"calico-kube-controllers-8f856f8-6czvj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cd4bd103f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.482 [INFO][5164] k8s.go 608: Cleaning up netns ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.482 [INFO][5164] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" iface="eth0" netns="" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.482 [INFO][5164] k8s.go 615: Releasing IP address(es) ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.482 [INFO][5164] utils.go 188: Calico CNI releasing IP address ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.526 [INFO][5172] ipam_plugin.go 417: Releasing address using handleID ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.527 [INFO][5172] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.527 [INFO][5172] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.534 [WARNING][5172] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.535 [INFO][5172] ipam_plugin.go 445: Releasing address using workloadID ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" HandleID="k8s-pod-network.df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--kube--controllers--8f856f8--6czvj-eth0" Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.537 [INFO][5172] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:15.544068 containerd[1594]: 2024-10-09 07:21:15.540 [INFO][5164] k8s.go 621: Teardown processing complete. ContainerID="df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13" Oct 9 07:21:15.544068 containerd[1594]: time="2024-10-09T07:21:15.544009122Z" level=info msg="TearDown network for sandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\" successfully" Oct 9 07:21:15.548601 containerd[1594]: time="2024-10-09T07:21:15.548511502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:21:15.548715 containerd[1594]: time="2024-10-09T07:21:15.548622599Z" level=info msg="RemovePodSandbox \"df262431353447533ae14f7e8540234c43b8638dff4f4777ae00acdf3ee89c13\" returns successfully" Oct 9 07:21:15.550299 containerd[1594]: time="2024-10-09T07:21:15.550133510Z" level=info msg="StopPodSandbox for \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\"" Oct 9 07:21:15.550778 containerd[1594]: time="2024-10-09T07:21:15.550618014Z" level=info msg="TearDown network for sandbox \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" successfully" Oct 9 07:21:15.550778 containerd[1594]: time="2024-10-09T07:21:15.550663280Z" level=info msg="StopPodSandbox for \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" returns successfully" Oct 9 07:21:15.551570 containerd[1594]: time="2024-10-09T07:21:15.551246216Z" level=info msg="RemovePodSandbox for \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\"" Oct 9 07:21:15.551570 containerd[1594]: time="2024-10-09T07:21:15.551290582Z" level=info msg="Forcibly stopping sandbox \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\"" Oct 9 07:21:15.565551 containerd[1594]: time="2024-10-09T07:21:15.551368451Z" level=info msg="TearDown network for sandbox \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" successfully" Oct 9 07:21:15.572524 containerd[1594]: time="2024-10-09T07:21:15.572220755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:21:15.572524 containerd[1594]: time="2024-10-09T07:21:15.572380959Z" level=info msg="RemovePodSandbox \"5c0fe1ebbad2b203912d58df78933c946718736c34a6120cabb87ca566fff530\" returns successfully" Oct 9 07:21:15.573379 containerd[1594]: time="2024-10-09T07:21:15.573342106Z" level=info msg="StopPodSandbox for \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\"" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.637 [WARNING][5190] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d0afbab9-e0c9-43bf-b03d-a80eefbc01be", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe", Pod:"coredns-76f75df574-stxm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia62c3a53bfd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.638 [INFO][5190] k8s.go 608: Cleaning up netns ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.638 [INFO][5190] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" iface="eth0" netns="" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.638 [INFO][5190] k8s.go 615: Releasing IP address(es) ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.638 [INFO][5190] utils.go 188: Calico CNI releasing IP address ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.673 [INFO][5196] ipam_plugin.go 417: Releasing address using handleID ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.673 [INFO][5196] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.673 [INFO][5196] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.684 [WARNING][5196] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.684 [INFO][5196] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.686 [INFO][5196] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:15.691671 containerd[1594]: 2024-10-09 07:21:15.689 [INFO][5190] k8s.go 621: Teardown processing complete. 
ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.692251 containerd[1594]: time="2024-10-09T07:21:15.691733928Z" level=info msg="TearDown network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\" successfully" Oct 9 07:21:15.692251 containerd[1594]: time="2024-10-09T07:21:15.691759239Z" level=info msg="StopPodSandbox for \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\" returns successfully" Oct 9 07:21:15.692477 containerd[1594]: time="2024-10-09T07:21:15.692453686Z" level=info msg="RemovePodSandbox for \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\"" Oct 9 07:21:15.692532 containerd[1594]: time="2024-10-09T07:21:15.692488431Z" level=info msg="Forcibly stopping sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\"" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.744 [WARNING][5214] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d0afbab9-e0c9-43bf-b03d-a80eefbc01be", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"4473b96f1bd298ecfd0f6721811fda32c477d03455ced2a1186bfe744e8f64fe", Pod:"coredns-76f75df574-stxm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia62c3a53bfd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.745 [INFO][5214] k8s.go 608: 
Cleaning up netns ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.745 [INFO][5214] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" iface="eth0" netns="" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.745 [INFO][5214] k8s.go 615: Releasing IP address(es) ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.745 [INFO][5214] utils.go 188: Calico CNI releasing IP address ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.777 [INFO][5220] ipam_plugin.go 417: Releasing address using handleID ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.777 [INFO][5220] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.778 [INFO][5220] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.784 [WARNING][5220] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.785 [INFO][5220] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" HandleID="k8s-pod-network.ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-coredns--76f75df574--stxm6-eth0" Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.787 [INFO][5220] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:15.791804 containerd[1594]: 2024-10-09 07:21:15.789 [INFO][5214] k8s.go 621: Teardown processing complete. ContainerID="ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b" Oct 9 07:21:15.792411 containerd[1594]: time="2024-10-09T07:21:15.791890403Z" level=info msg="TearDown network for sandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\" successfully" Oct 9 07:21:15.797312 containerd[1594]: time="2024-10-09T07:21:15.797146908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:21:15.797312 containerd[1594]: time="2024-10-09T07:21:15.797275372Z" level=info msg="RemovePodSandbox \"ff82f6a143dbe2170e9cac8003a3e122eb1328ba382897fbae3bf4524e89917b\" returns successfully" Oct 9 07:21:15.798803 containerd[1594]: time="2024-10-09T07:21:15.798754749Z" level=info msg="StopPodSandbox for \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\"" Oct 9 07:21:15.798955 containerd[1594]: time="2024-10-09T07:21:15.798870562Z" level=info msg="TearDown network for sandbox \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" successfully" Oct 9 07:21:15.798955 containerd[1594]: time="2024-10-09T07:21:15.798886432Z" level=info msg="StopPodSandbox for \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" returns successfully" Oct 9 07:21:15.800163 containerd[1594]: time="2024-10-09T07:21:15.799708590Z" level=info msg="RemovePodSandbox for \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\"" Oct 9 07:21:15.800163 containerd[1594]: time="2024-10-09T07:21:15.799793397Z" level=info msg="Forcibly stopping sandbox \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\"" Oct 9 07:21:15.800163 containerd[1594]: time="2024-10-09T07:21:15.799955803Z" level=info msg="TearDown network for sandbox \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" successfully" Oct 9 07:21:15.806044 containerd[1594]: time="2024-10-09T07:21:15.805977173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:21:15.806711 containerd[1594]: time="2024-10-09T07:21:15.806057487Z" level=info msg="RemovePodSandbox \"23276d9274cfa1b681cf9cd4b97ce298ae45dc2d51c1949a35937a21499c617d\" returns successfully" Oct 9 07:21:15.807323 containerd[1594]: time="2024-10-09T07:21:15.807266081Z" level=info msg="StopPodSandbox for \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\"" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.862 [WARNING][5239] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1f970c3-0e6b-4c36-a7b6-7c163a15816e", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7", Pod:"csi-node-driver-h7tbg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califd7b9fcca1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.863 [INFO][5239] k8s.go 608: Cleaning up netns ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.863 [INFO][5239] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" iface="eth0" netns="" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.863 [INFO][5239] k8s.go 615: Releasing IP address(es) ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.863 [INFO][5239] utils.go 188: Calico CNI releasing IP address ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.893 [INFO][5245] ipam_plugin.go 417: Releasing address using handleID ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.893 [INFO][5245] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.893 [INFO][5245] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.901 [WARNING][5245] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.901 [INFO][5245] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.903 [INFO][5245] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:15.907803 containerd[1594]: 2024-10-09 07:21:15.905 [INFO][5239] k8s.go 621: Teardown processing complete. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:15.908845 containerd[1594]: time="2024-10-09T07:21:15.908245320Z" level=info msg="TearDown network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\" successfully" Oct 9 07:21:15.908845 containerd[1594]: time="2024-10-09T07:21:15.908280443Z" level=info msg="StopPodSandbox for \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\" returns successfully" Oct 9 07:21:15.909353 containerd[1594]: time="2024-10-09T07:21:15.909303818Z" level=info msg="RemovePodSandbox for \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\"" Oct 9 07:21:15.909353 containerd[1594]: time="2024-10-09T07:21:15.909356667Z" level=info msg="Forcibly stopping sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\"" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:15.962 [WARNING][5263] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1f970c3-0e6b-4c36-a7b6-7c163a15816e", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"589d7f337c2012bd82c67821a2e3dc46eefb61ab69903e1f4bc00967ea5fd7d7", Pod:"csi-node-driver-h7tbg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califd7b9fcca1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:15.963 [INFO][5263] k8s.go 608: Cleaning up netns ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:15.963 [INFO][5263] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" iface="eth0" netns="" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:15.963 [INFO][5263] k8s.go 615: Releasing IP address(es) ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:15.963 [INFO][5263] utils.go 188: Calico CNI releasing IP address ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:16.002 [INFO][5269] ipam_plugin.go 417: Releasing address using handleID ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:16.002 [INFO][5269] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:16.002 [INFO][5269] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:16.012 [WARNING][5269] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:16.013 [INFO][5269] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" HandleID="k8s-pod-network.b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-csi--node--driver--h7tbg-eth0" Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:16.015 [INFO][5269] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:16.020056 containerd[1594]: 2024-10-09 07:21:16.017 [INFO][5263] k8s.go 621: Teardown processing complete. ContainerID="b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66" Oct 9 07:21:16.020056 containerd[1594]: time="2024-10-09T07:21:16.019948952Z" level=info msg="TearDown network for sandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\" successfully" Oct 9 07:21:16.024343 containerd[1594]: time="2024-10-09T07:21:16.024274031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:21:16.024843 containerd[1594]: time="2024-10-09T07:21:16.024377208Z" level=info msg="RemovePodSandbox \"b8084a1f6f964263c12e85d0d0eb3be3c4a1b7491315bc4a047f4f1295143e66\" returns successfully" Oct 9 07:21:19.133237 systemd[1]: Started sshd@9-161.35.237.80:22-147.75.109.163:33412.service - OpenSSH per-connection server daemon (147.75.109.163:33412). 
Oct 9 07:21:19.196454 sshd[5295]: Accepted publickey for core from 147.75.109.163 port 33412 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:19.197525 sshd[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:19.205988 systemd-logind[1573]: New session 10 of user core. Oct 9 07:21:19.213688 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:21:19.400871 sshd[5295]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:19.406215 systemd[1]: sshd@9-161.35.237.80:22-147.75.109.163:33412.service: Deactivated successfully. Oct 9 07:21:19.411778 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:21:19.418136 systemd[1]: Started sshd@10-161.35.237.80:22-147.75.109.163:33420.service - OpenSSH per-connection server daemon (147.75.109.163:33420). Oct 9 07:21:19.419837 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:21:19.422351 systemd-logind[1573]: Removed session 10. Oct 9 07:21:19.496573 sshd[5310]: Accepted publickey for core from 147.75.109.163 port 33420 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:19.498844 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:19.506328 systemd-logind[1573]: New session 11 of user core. Oct 9 07:21:19.513175 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:21:19.761146 sshd[5310]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:19.772022 systemd[1]: Started sshd@11-161.35.237.80:22-147.75.109.163:33434.service - OpenSSH per-connection server daemon (147.75.109.163:33434). Oct 9 07:21:19.774011 systemd[1]: sshd@10-161.35.237.80:22-147.75.109.163:33420.service: Deactivated successfully. Oct 9 07:21:19.781196 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:21:19.782233 systemd[1]: session-11.scope: Deactivated successfully. 
Oct 9 07:21:19.786490 systemd-logind[1573]: Removed session 11. Oct 9 07:21:19.849412 sshd[5319]: Accepted publickey for core from 147.75.109.163 port 33434 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:19.851395 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:19.857664 systemd-logind[1573]: New session 12 of user core. Oct 9 07:21:19.862143 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:21:20.054932 sshd[5319]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:20.061077 systemd[1]: sshd@11-161.35.237.80:22-147.75.109.163:33434.service: Deactivated successfully. Oct 9 07:21:20.067190 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:21:20.067500 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:21:20.069897 systemd-logind[1573]: Removed session 12. Oct 9 07:21:25.069973 systemd[1]: Started sshd@12-161.35.237.80:22-147.75.109.163:33440.service - OpenSSH per-connection server daemon (147.75.109.163:33440). Oct 9 07:21:25.117445 sshd[5348]: Accepted publickey for core from 147.75.109.163 port 33440 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:25.119840 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:25.126924 systemd-logind[1573]: New session 13 of user core. Oct 9 07:21:25.131452 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:21:25.276849 sshd[5348]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:25.283193 systemd[1]: sshd@12-161.35.237.80:22-147.75.109.163:33440.service: Deactivated successfully. Oct 9 07:21:25.286996 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:21:25.288119 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:21:25.289622 systemd-logind[1573]: Removed session 13. 
Oct 9 07:21:30.288252 systemd[1]: Started sshd@13-161.35.237.80:22-147.75.109.163:46992.service - OpenSSH per-connection server daemon (147.75.109.163:46992). Oct 9 07:21:30.343588 sshd[5369]: Accepted publickey for core from 147.75.109.163 port 46992 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:30.346004 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:30.353732 systemd-logind[1573]: New session 14 of user core. Oct 9 07:21:30.358173 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:21:30.565216 sshd[5369]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:30.573965 systemd[1]: sshd@13-161.35.237.80:22-147.75.109.163:46992.service: Deactivated successfully. Oct 9 07:21:30.580219 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:21:30.581918 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:21:30.584223 systemd-logind[1573]: Removed session 14. Oct 9 07:21:31.989719 kubelet[2700]: E1009 07:21:31.989654 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:35.572985 systemd[1]: Started sshd@14-161.35.237.80:22-147.75.109.163:47004.service - OpenSSH per-connection server daemon (147.75.109.163:47004). Oct 9 07:21:35.715967 sshd[5405]: Accepted publickey for core from 147.75.109.163 port 47004 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:35.720349 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:35.732350 systemd-logind[1573]: New session 15 of user core. Oct 9 07:21:35.736448 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 9 07:21:36.027949 sshd[5405]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:36.035452 systemd[1]: sshd@14-161.35.237.80:22-147.75.109.163:47004.service: Deactivated successfully. Oct 9 07:21:36.047124 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:21:36.048191 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:21:36.051453 systemd-logind[1573]: Removed session 15. Oct 9 07:21:39.055304 kubelet[2700]: I1009 07:21:39.055239 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-h7tbg" podStartSLOduration=58.638674864 podStartE2EDuration="1m5.054821329s" podCreationTimestamp="2024-10-09 07:20:34 +0000 UTC" firstStartedPulling="2024-10-09 07:21:03.696234547 +0000 UTC m=+48.887940495" lastFinishedPulling="2024-10-09 07:21:10.112381023 +0000 UTC m=+55.304086960" observedRunningTime="2024-10-09 07:21:10.591040236 +0000 UTC m=+55.782746191" watchObservedRunningTime="2024-10-09 07:21:39.054821329 +0000 UTC m=+84.246527285" Oct 9 07:21:39.066969 kubelet[2700]: I1009 07:21:39.066912 2700 topology_manager.go:215] "Topology Admit Handler" podUID="0be44154-60d5-4392-9d19-9529b282a542" podNamespace="calico-apiserver" podName="calico-apiserver-6f867bcb7f-vbh2j" Oct 9 07:21:39.103299 kubelet[2700]: I1009 07:21:39.100935 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvpr2\" (UniqueName: \"kubernetes.io/projected/0be44154-60d5-4392-9d19-9529b282a542-kube-api-access-jvpr2\") pod \"calico-apiserver-6f867bcb7f-vbh2j\" (UID: \"0be44154-60d5-4392-9d19-9529b282a542\") " pod="calico-apiserver/calico-apiserver-6f867bcb7f-vbh2j" Oct 9 07:21:39.111697 kubelet[2700]: I1009 07:21:39.111657 2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/0be44154-60d5-4392-9d19-9529b282a542-calico-apiserver-certs\") pod \"calico-apiserver-6f867bcb7f-vbh2j\" (UID: \"0be44154-60d5-4392-9d19-9529b282a542\") " pod="calico-apiserver/calico-apiserver-6f867bcb7f-vbh2j" Oct 9 07:21:39.220785 kubelet[2700]: E1009 07:21:39.220594 2700 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:21:39.233091 kubelet[2700]: E1009 07:21:39.233038 2700 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0be44154-60d5-4392-9d19-9529b282a542-calico-apiserver-certs podName:0be44154-60d5-4392-9d19-9529b282a542 nodeName:}" failed. No retries permitted until 2024-10-09 07:21:39.720717906 +0000 UTC m=+84.912423852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0be44154-60d5-4392-9d19-9529b282a542-calico-apiserver-certs") pod "calico-apiserver-6f867bcb7f-vbh2j" (UID: "0be44154-60d5-4392-9d19-9529b282a542") : secret "calico-apiserver-certs" not found Oct 9 07:21:39.818598 kubelet[2700]: E1009 07:21:39.818368 2700 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:21:39.818849 kubelet[2700]: E1009 07:21:39.818688 2700 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0be44154-60d5-4392-9d19-9529b282a542-calico-apiserver-certs podName:0be44154-60d5-4392-9d19-9529b282a542 nodeName:}" failed. No retries permitted until 2024-10-09 07:21:40.818633051 +0000 UTC m=+86.010339007 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0be44154-60d5-4392-9d19-9529b282a542-calico-apiserver-certs") pod "calico-apiserver-6f867bcb7f-vbh2j" (UID: "0be44154-60d5-4392-9d19-9529b282a542") : secret "calico-apiserver-certs" not found Oct 9 07:21:40.916436 containerd[1594]: time="2024-10-09T07:21:40.916371737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f867bcb7f-vbh2j,Uid:0be44154-60d5-4392-9d19-9529b282a542,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:21:41.039983 systemd[1]: Started sshd@15-161.35.237.80:22-147.75.109.163:39156.service - OpenSSH per-connection server daemon (147.75.109.163:39156). Oct 9 07:21:41.165880 sshd[5442]: Accepted publickey for core from 147.75.109.163 port 39156 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:41.172584 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:41.191241 systemd-logind[1573]: New session 16 of user core. Oct 9 07:21:41.195428 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 9 07:21:41.241617 systemd-networkd[1226]: calie7d80173bf7: Link UP Oct 9 07:21:41.244097 systemd-networkd[1226]: calie7d80173bf7: Gained carrier Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.028 [INFO][5436] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0 calico-apiserver-6f867bcb7f- calico-apiserver 0be44154-60d5-4392-9d19-9529b282a542 1173 0 2024-10-09 07:21:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f867bcb7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.2-f-f6e42a54cc calico-apiserver-6f867bcb7f-vbh2j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7d80173bf7 [] []}} ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.029 [INFO][5436] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.113 [INFO][5443] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" HandleID="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" Oct 9 07:21:41.275628 containerd[1594]: 
2024-10-09 07:21:41.143 [INFO][5443] ipam_plugin.go 270: Auto assigning IP ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" HandleID="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.2-f-f6e42a54cc", "pod":"calico-apiserver-6f867bcb7f-vbh2j", "timestamp":"2024-10-09 07:21:41.112996654 +0000 UTC"}, Hostname:"ci-3975.2.2-f-f6e42a54cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.143 [INFO][5443] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.143 [INFO][5443] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.143 [INFO][5443] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-f-f6e42a54cc' Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.150 [INFO][5443] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.168 [INFO][5443] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.184 [INFO][5443] ipam.go 489: Trying affinity for 192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.188 [INFO][5443] ipam.go 155: Attempting to load block cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.198 [INFO][5443] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.201 [INFO][5443] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.209 [INFO][5443] ipam.go 1685: Creating new handle: k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4 Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.222 [INFO][5443] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.232 [INFO][5443] ipam.go 1216: Successfully claimed IPs: [192.168.23.5/26] block=192.168.23.0/26 
handle="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.232 [INFO][5443] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.5/26] handle="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" host="ci-3975.2.2-f-f6e42a54cc" Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.233 [INFO][5443] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:21:41.275628 containerd[1594]: 2024-10-09 07:21:41.233 [INFO][5443] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.5/26] IPv6=[] ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" HandleID="k8s-pod-network.75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Workload="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" Oct 9 07:21:41.279046 containerd[1594]: 2024-10-09 07:21:41.235 [INFO][5436] k8s.go 386: Populated endpoint ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0", GenerateName:"calico-apiserver-6f867bcb7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0be44154-60d5-4392-9d19-9529b282a542", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f867bcb7f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"", Pod:"calico-apiserver-6f867bcb7f-vbh2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7d80173bf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:41.279046 containerd[1594]: 2024-10-09 07:21:41.235 [INFO][5436] k8s.go 387: Calico CNI using IPs: [192.168.23.5/32] ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" Oct 9 07:21:41.279046 containerd[1594]: 2024-10-09 07:21:41.235 [INFO][5436] dataplane_linux.go 68: Setting the host side veth name to calie7d80173bf7 ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" Oct 9 07:21:41.279046 containerd[1594]: 2024-10-09 07:21:41.245 [INFO][5436] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" Oct 9 07:21:41.279046 containerd[1594]: 2024-10-09 07:21:41.245 
[INFO][5436] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0", GenerateName:"calico-apiserver-6f867bcb7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0be44154-60d5-4392-9d19-9529b282a542", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f867bcb7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-f-f6e42a54cc", ContainerID:"75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4", Pod:"calico-apiserver-6f867bcb7f-vbh2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7d80173bf7", MAC:"86:0e:96:3d:a4:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:21:41.279046 containerd[1594]: 2024-10-09 07:21:41.270 [INFO][5436] k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4" Namespace="calico-apiserver" Pod="calico-apiserver-6f867bcb7f-vbh2j" WorkloadEndpoint="ci--3975.2.2--f--f6e42a54cc-k8s-calico--apiserver--6f867bcb7f--vbh2j-eth0" Oct 9 07:21:41.325027 containerd[1594]: time="2024-10-09T07:21:41.324848450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:21:41.325457 containerd[1594]: time="2024-10-09T07:21:41.324998219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:41.325457 containerd[1594]: time="2024-10-09T07:21:41.325272332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:21:41.325457 containerd[1594]: time="2024-10-09T07:21:41.325300097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:21:41.417262 containerd[1594]: time="2024-10-09T07:21:41.417021449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f867bcb7f-vbh2j,Uid:0be44154-60d5-4392-9d19-9529b282a542,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4\"" Oct 9 07:21:41.439179 containerd[1594]: time="2024-10-09T07:21:41.438506754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:21:41.664778 sshd[5442]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:41.683321 systemd[1]: Started sshd@16-161.35.237.80:22-147.75.109.163:39172.service - OpenSSH per-connection server daemon (147.75.109.163:39172). Oct 9 07:21:41.686773 systemd[1]: sshd@15-161.35.237.80:22-147.75.109.163:39156.service: Deactivated successfully. 
Oct 9 07:21:41.694699 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:21:41.699006 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:21:41.706475 systemd-logind[1573]: Removed session 16. Oct 9 07:21:41.815937 sshd[5517]: Accepted publickey for core from 147.75.109.163 port 39172 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:41.817988 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:41.824997 systemd-logind[1573]: New session 17 of user core. Oct 9 07:21:41.829031 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:21:42.108746 systemd-journald[1137]: Under memory pressure, flushing caches. Oct 9 07:21:42.105085 systemd-resolved[1487]: Under memory pressure, flushing caches. Oct 9 07:21:42.105128 systemd-resolved[1487]: Flushed all caches. Oct 9 07:21:42.247940 sshd[5517]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:42.264029 systemd[1]: Started sshd@17-161.35.237.80:22-147.75.109.163:39180.service - OpenSSH per-connection server daemon (147.75.109.163:39180). Oct 9 07:21:42.264570 systemd[1]: sshd@16-161.35.237.80:22-147.75.109.163:39172.service: Deactivated successfully. Oct 9 07:21:42.272977 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:21:42.273231 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:21:42.278327 systemd-logind[1573]: Removed session 17. Oct 9 07:21:42.356266 sshd[5528]: Accepted publickey for core from 147.75.109.163 port 39180 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:42.356768 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:42.366747 systemd-logind[1573]: New session 18 of user core. Oct 9 07:21:42.376446 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 9 07:21:43.063803 systemd-networkd[1226]: calie7d80173bf7: Gained IPv6LL Oct 9 07:21:44.155981 systemd-journald[1137]: Under memory pressure, flushing caches. Oct 9 07:21:44.151950 systemd-resolved[1487]: Under memory pressure, flushing caches. Oct 9 07:21:44.151980 systemd-resolved[1487]: Flushed all caches. Oct 9 07:21:44.862363 containerd[1594]: time="2024-10-09T07:21:44.862075232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:44.871772 containerd[1594]: time="2024-10-09T07:21:44.871687626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 07:21:44.873770 containerd[1594]: time="2024-10-09T07:21:44.873699129Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:44.883515 containerd[1594]: time="2024-10-09T07:21:44.883437629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:21:44.889229 containerd[1594]: time="2024-10-09T07:21:44.889172863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.450608271s" Oct 9 07:21:44.889229 containerd[1594]: time="2024-10-09T07:21:44.889225588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:21:44.894259 
containerd[1594]: time="2024-10-09T07:21:44.893728957Z" level=info msg="CreateContainer within sandbox \"75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:21:44.936253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851439442.mount: Deactivated successfully. Oct 9 07:21:44.938333 containerd[1594]: time="2024-10-09T07:21:44.938278375Z" level=info msg="CreateContainer within sandbox \"75ca574184e1132ff517e4c9a21d003c8ef94d0eaf157e655b21d016cf0e40d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"27de751955de86899e906119673e2828a19d12067a3182b8eee465edbe31dc44\"" Oct 9 07:21:44.939465 containerd[1594]: time="2024-10-09T07:21:44.939430041Z" level=info msg="StartContainer for \"27de751955de86899e906119673e2828a19d12067a3182b8eee465edbe31dc44\"" Oct 9 07:21:44.992573 kubelet[2700]: E1009 07:21:44.988948 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:45.101631 containerd[1594]: time="2024-10-09T07:21:45.101579189Z" level=info msg="StartContainer for \"27de751955de86899e906119673e2828a19d12067a3182b8eee465edbe31dc44\" returns successfully" Oct 9 07:21:45.323342 sshd[5528]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:45.339174 systemd[1]: Started sshd@18-161.35.237.80:22-147.75.109.163:39196.service - OpenSSH per-connection server daemon (147.75.109.163:39196). Oct 9 07:21:45.344110 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:21:45.346664 systemd[1]: sshd@17-161.35.237.80:22-147.75.109.163:39180.service: Deactivated successfully. Oct 9 07:21:45.353876 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:21:45.358130 systemd-logind[1573]: Removed session 18. 
Oct 9 07:21:45.496375 sshd[5597]: Accepted publickey for core from 147.75.109.163 port 39196 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:45.503164 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:45.515508 systemd-logind[1573]: New session 19 of user core. Oct 9 07:21:45.521907 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:21:45.745661 kubelet[2700]: I1009 07:21:45.745521 2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f867bcb7f-vbh2j" podStartSLOduration=3.29375867 podStartE2EDuration="6.74545679s" podCreationTimestamp="2024-10-09 07:21:39 +0000 UTC" firstStartedPulling="2024-10-09 07:21:41.437946608 +0000 UTC m=+86.629652543" lastFinishedPulling="2024-10-09 07:21:44.889644727 +0000 UTC m=+90.081350663" observedRunningTime="2024-10-09 07:21:45.741997464 +0000 UTC m=+90.933703420" watchObservedRunningTime="2024-10-09 07:21:45.74545679 +0000 UTC m=+90.937162746" Oct 9 07:21:45.989490 kubelet[2700]: E1009 07:21:45.989273 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:46.203768 systemd-journald[1137]: Under memory pressure, flushing caches. Oct 9 07:21:46.201292 systemd-resolved[1487]: Under memory pressure, flushing caches. Oct 9 07:21:46.201300 systemd-resolved[1487]: Flushed all caches. Oct 9 07:21:46.456041 sshd[5597]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:46.476114 systemd[1]: Started sshd@19-161.35.237.80:22-147.75.109.163:39206.service - OpenSSH per-connection server daemon (147.75.109.163:39206). Oct 9 07:21:46.480608 systemd[1]: sshd@18-161.35.237.80:22-147.75.109.163:39196.service: Deactivated successfully. Oct 9 07:21:46.499384 systemd[1]: session-19.scope: Deactivated successfully. 
Oct 9 07:21:46.503963 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:21:46.508075 systemd-logind[1573]: Removed session 19. Oct 9 07:21:46.576567 sshd[5621]: Accepted publickey for core from 147.75.109.163 port 39206 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:46.578498 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:46.595752 systemd-logind[1573]: New session 20 of user core. Oct 9 07:21:46.607524 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:21:46.870319 sshd[5621]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:46.876391 systemd[1]: sshd@19-161.35.237.80:22-147.75.109.163:39206.service: Deactivated successfully. Oct 9 07:21:46.880358 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:21:46.880413 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:21:46.884381 systemd-logind[1573]: Removed session 20. Oct 9 07:21:51.879895 systemd[1]: Started sshd@20-161.35.237.80:22-147.75.109.163:54214.service - OpenSSH per-connection server daemon (147.75.109.163:54214). Oct 9 07:21:51.961678 sshd[5660]: Accepted publickey for core from 147.75.109.163 port 54214 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:51.964018 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:51.970990 systemd-logind[1573]: New session 21 of user core. Oct 9 07:21:51.976216 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:21:52.198803 sshd[5660]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:52.205509 systemd[1]: sshd@20-161.35.237.80:22-147.75.109.163:54214.service: Deactivated successfully. Oct 9 07:21:52.211964 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:21:52.212968 systemd[1]: session-21.scope: Deactivated successfully. 
Oct 9 07:21:52.215397 systemd-logind[1573]: Removed session 21. Oct 9 07:21:53.989235 kubelet[2700]: E1009 07:21:53.989172 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:21:57.214199 systemd[1]: Started sshd@21-161.35.237.80:22-147.75.109.163:37248.service - OpenSSH per-connection server daemon (147.75.109.163:37248). Oct 9 07:21:57.256315 sshd[5681]: Accepted publickey for core from 147.75.109.163 port 37248 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:21:57.258216 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:21:57.263820 systemd-logind[1573]: New session 22 of user core. Oct 9 07:21:57.269934 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:21:57.470525 sshd[5681]: pam_unix(sshd:session): session closed for user core Oct 9 07:21:57.475633 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:21:57.477025 systemd[1]: sshd@21-161.35.237.80:22-147.75.109.163:37248.service: Deactivated successfully. Oct 9 07:21:57.481665 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:21:57.484049 systemd-logind[1573]: Removed session 22. Oct 9 07:22:02.482071 systemd[1]: Started sshd@22-161.35.237.80:22-147.75.109.163:37262.service - OpenSSH per-connection server daemon (147.75.109.163:37262). Oct 9 07:22:02.572346 sshd[5698]: Accepted publickey for core from 147.75.109.163 port 37262 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:22:02.574920 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:02.581671 systemd-logind[1573]: New session 23 of user core. Oct 9 07:22:02.585699 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 9 07:22:02.773806 sshd[5698]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:02.780187 systemd[1]: sshd@22-161.35.237.80:22-147.75.109.163:37262.service: Deactivated successfully. Oct 9 07:22:02.785438 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:22:02.786223 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. Oct 9 07:22:02.790152 systemd-logind[1573]: Removed session 23. Oct 9 07:22:05.989850 kubelet[2700]: E1009 07:22:05.989792 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:22:07.791031 systemd[1]: Started sshd@23-161.35.237.80:22-147.75.109.163:56236.service - OpenSSH per-connection server daemon (147.75.109.163:56236). Oct 9 07:22:07.885451 sshd[5740]: Accepted publickey for core from 147.75.109.163 port 56236 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:22:07.887447 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:07.895725 systemd-logind[1573]: New session 24 of user core. Oct 9 07:22:07.901082 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 07:22:08.169633 sshd[5740]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:08.176504 systemd[1]: sshd@23-161.35.237.80:22-147.75.109.163:56236.service: Deactivated successfully. Oct 9 07:22:08.182353 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit. Oct 9 07:22:08.183148 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 07:22:08.187384 systemd-logind[1573]: Removed session 24. Oct 9 07:22:13.182042 systemd[1]: Started sshd@24-161.35.237.80:22-147.75.109.163:56250.service - OpenSSH per-connection server daemon (147.75.109.163:56250). 
Oct 9 07:22:13.233117 sshd[5773]: Accepted publickey for core from 147.75.109.163 port 56250 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:22:13.235210 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:13.242440 systemd-logind[1573]: New session 25 of user core. Oct 9 07:22:13.253736 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 07:22:13.401841 sshd[5773]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:13.405807 systemd[1]: sshd@24-161.35.237.80:22-147.75.109.163:56250.service: Deactivated successfully. Oct 9 07:22:13.410519 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit. Oct 9 07:22:13.412842 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 07:22:13.414449 systemd-logind[1573]: Removed session 25. Oct 9 07:22:18.411021 systemd[1]: Started sshd@25-161.35.237.80:22-147.75.109.163:46108.service - OpenSSH per-connection server daemon (147.75.109.163:46108). Oct 9 07:22:18.463462 sshd[5814]: Accepted publickey for core from 147.75.109.163 port 46108 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:22:18.465384 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:22:18.472290 systemd-logind[1573]: New session 26 of user core. Oct 9 07:22:18.479135 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 9 07:22:18.619713 sshd[5814]: pam_unix(sshd:session): session closed for user core Oct 9 07:22:18.623030 systemd[1]: sshd@25-161.35.237.80:22-147.75.109.163:46108.service: Deactivated successfully. Oct 9 07:22:18.628519 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 07:22:18.629613 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit. Oct 9 07:22:18.631935 systemd-logind[1573]: Removed session 26. 
Oct 9 07:22:18.989244 kubelet[2700]: E1009 07:22:18.988720 2700 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"