Oct 9 07:52:50.886091 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 9 07:52:50.886119 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 9 07:52:50.886136 kernel: BIOS-provided physical RAM map:
Oct 9 07:52:50.886143 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:52:50.886149 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:52:50.886155 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:52:50.886163 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct 9 07:52:50.886170 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct 9 07:52:50.886177 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:52:50.886187 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:52:50.886198 kernel: NX (Execute Disable) protection: active
Oct 9 07:52:50.886206 kernel: APIC: Static calls initialized
Oct 9 07:52:50.886217 kernel: SMBIOS 2.8 present.
Oct 9 07:52:50.886227 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 9 07:52:50.886239 kernel: Hypervisor detected: KVM
Oct 9 07:52:50.886255 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:52:50.886266 kernel: kvm-clock: using sched offset of 2841653287 cycles
Oct 9 07:52:50.886277 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:52:50.886286 kernel: tsc: Detected 2494.136 MHz processor
Oct 9 07:52:50.886294 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:52:50.886302 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:52:50.886310 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 9 07:52:50.886318 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:52:50.886326 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:52:50.886337 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:52:50.886345 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Oct 9 07:52:50.886353 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:50.886360 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:50.886368 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:50.886376 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 07:52:50.886383 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:50.886391 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:50.886399 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:50.886409 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:50.886417 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 9 07:52:50.886424 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 9 07:52:50.886432 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 07:52:50.886439 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 9 07:52:50.886447 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 9 07:52:50.886455 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 9 07:52:50.886469 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 9 07:52:50.886479 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 9 07:52:50.886487 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 9 07:52:50.886496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 9 07:52:50.886504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 9 07:52:50.886513 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Oct 9 07:52:50.886521 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Oct 9 07:52:50.886532 kernel: Zone ranges:
Oct 9 07:52:50.886540 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:52:50.886548 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Oct 9 07:52:50.886556 kernel: Normal empty
Oct 9 07:52:50.886564 kernel: Movable zone start for each node
Oct 9 07:52:50.886573 kernel: Early memory node ranges
Oct 9 07:52:50.886581 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:52:50.886589 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct 9 07:52:50.886597 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Oct 9 07:52:50.886608 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:52:50.886618 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:52:50.886627 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Oct 9 07:52:50.886635 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:52:50.886643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:52:50.886651 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:52:50.886659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:52:50.886667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:52:50.886675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:52:50.886686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:52:50.886694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:52:50.886702 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:52:50.886710 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:52:50.886718 kernel: TSC deadline timer available
Oct 9 07:52:50.886726 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 07:52:50.886734 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:52:50.886742 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 07:52:50.886750 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:52:50.886761 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:52:50.886772 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 07:52:50.886781 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 07:52:50.886789 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 07:52:50.886797 kernel: pcpu-alloc: [0] 0 1
Oct 9 07:52:50.886805 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 07:52:50.886814 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 9 07:52:50.886823 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:52:50.886831 kernel: random: crng init done
Oct 9 07:52:50.886841 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:52:50.886850 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 07:52:50.886858 kernel: Fallback order for Node 0: 0
Oct 9 07:52:50.886866 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Oct 9 07:52:50.886874 kernel: Policy zone: DMA32
Oct 9 07:52:50.886882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:52:50.886890 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Oct 9 07:52:50.886899 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 07:52:50.886909 kernel: Kernel/User page tables isolation: enabled
Oct 9 07:52:50.886917 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 9 07:52:50.886925 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:52:50.886933 kernel: Dynamic Preempt: voluntary
Oct 9 07:52:50.886942 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:52:50.886951 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:52:50.886959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 07:52:50.886967 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:52:50.886975 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:52:50.886984 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:52:50.886994 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:52:50.887002 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 07:52:50.887010 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 07:52:50.887021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:52:50.887029 kernel: Console: colour VGA+ 80x25
Oct 9 07:52:50.887037 kernel: printk: console [tty0] enabled
Oct 9 07:52:50.887045 kernel: printk: console [ttyS0] enabled
Oct 9 07:52:50.887053 kernel: ACPI: Core revision 20230628
Oct 9 07:52:50.887086 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:52:50.887098 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:52:50.887106 kernel: x2apic enabled
Oct 9 07:52:50.887114 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:52:50.887122 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:52:50.887131 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Oct 9 07:52:50.887139 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Oct 9 07:52:50.887147 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 07:52:50.887155 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 07:52:50.887175 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:52:50.887183 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:52:50.887192 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:52:50.887203 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:52:50.887213 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 9 07:52:50.887226 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:52:50.887239 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:52:50.887252 kernel: MDS: Mitigation: Clear CPU buffers
Oct 9 07:52:50.887264 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 9 07:52:50.887284 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:52:50.887298 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:52:50.887311 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:52:50.887323 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:52:50.887336 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 9 07:52:50.887348 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:52:50.887360 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:52:50.887375 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 07:52:50.887387 kernel: landlock: Up and running.
Oct 9 07:52:50.887396 kernel: SELinux: Initializing.
Oct 9 07:52:50.887405 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:52:50.887414 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:52:50.887422 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 9 07:52:50.887431 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:52:50.887440 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:52:50.887449 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:52:50.887457 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 9 07:52:50.887469 kernel: signal: max sigframe size: 1776
Oct 9 07:52:50.887478 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:52:50.887487 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:52:50.887501 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 9 07:52:50.887514 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:52:50.887525 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:52:50.887541 kernel: .... node #0, CPUs: #1
Oct 9 07:52:50.887554 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 07:52:50.887566 kernel: smpboot: Max logical packages: 1
Oct 9 07:52:50.887585 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Oct 9 07:52:50.887596 kernel: devtmpfs: initialized
Oct 9 07:52:50.887605 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:52:50.887613 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:52:50.887622 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 07:52:50.887631 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:52:50.887639 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:52:50.887648 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:52:50.887657 kernel: audit: type=2000 audit(1728460369.266:1): state=initialized audit_enabled=0 res=1
Oct 9 07:52:50.887669 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:52:50.887677 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:52:50.887817 kernel: cpuidle: using governor menu
Oct 9 07:52:50.887826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:52:50.887836 kernel: dca service started, version 1.12.1
Oct 9 07:52:50.887844 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:52:50.887853 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:52:50.887862 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:52:50.887871 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:52:50.887883 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:52:50.887892 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:52:50.887906 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:52:50.887916 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:52:50.887929 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:52:50.887943 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:52:50.887952 kernel: ACPI: Interpreter enabled
Oct 9 07:52:50.887965 kernel: ACPI: PM: (supports S0 S5)
Oct 9 07:52:50.887973 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:52:50.887985 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:52:50.887994 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:52:50.888003 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 07:52:50.888011 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:52:50.888278 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:52:50.888444 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 07:52:50.888607 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 07:52:50.888635 kernel: acpiphp: Slot [3] registered
Oct 9 07:52:50.888652 kernel: acpiphp: Slot [4] registered
Oct 9 07:52:50.888669 kernel: acpiphp: Slot [5] registered
Oct 9 07:52:50.888682 kernel: acpiphp: Slot [6] registered
Oct 9 07:52:50.888695 kernel: acpiphp: Slot [7] registered
Oct 9 07:52:50.888707 kernel: acpiphp: Slot [8] registered
Oct 9 07:52:50.888718 kernel: acpiphp: Slot [9] registered
Oct 9 07:52:50.888731 kernel: acpiphp: Slot [10] registered
Oct 9 07:52:50.888743 kernel: acpiphp: Slot [11] registered
Oct 9 07:52:50.888755 kernel: acpiphp: Slot [12] registered
Oct 9 07:52:50.888771 kernel: acpiphp: Slot [13] registered
Oct 9 07:52:50.888783 kernel: acpiphp: Slot [14] registered
Oct 9 07:52:50.888795 kernel: acpiphp: Slot [15] registered
Oct 9 07:52:50.888808 kernel: acpiphp: Slot [16] registered
Oct 9 07:52:50.888819 kernel: acpiphp: Slot [17] registered
Oct 9 07:52:50.888832 kernel: acpiphp: Slot [18] registered
Oct 9 07:52:50.888844 kernel: acpiphp: Slot [19] registered
Oct 9 07:52:50.888856 kernel: acpiphp: Slot [20] registered
Oct 9 07:52:50.888867 kernel: acpiphp: Slot [21] registered
Oct 9 07:52:50.888883 kernel: acpiphp: Slot [22] registered
Oct 9 07:52:50.888895 kernel: acpiphp: Slot [23] registered
Oct 9 07:52:50.888907 kernel: acpiphp: Slot [24] registered
Oct 9 07:52:50.888919 kernel: acpiphp: Slot [25] registered
Oct 9 07:52:50.888931 kernel: acpiphp: Slot [26] registered
Oct 9 07:52:50.888943 kernel: acpiphp: Slot [27] registered
Oct 9 07:52:50.888955 kernel: acpiphp: Slot [28] registered
Oct 9 07:52:50.888966 kernel: acpiphp: Slot [29] registered
Oct 9 07:52:50.888978 kernel: acpiphp: Slot [30] registered
Oct 9 07:52:50.888990 kernel: acpiphp: Slot [31] registered
Oct 9 07:52:50.889007 kernel: PCI host bridge to bus 0000:00
Oct 9 07:52:50.889267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:52:50.889410 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:52:50.889510 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:52:50.889620 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 07:52:50.889712 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 07:52:50.889798 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:52:50.889920 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 07:52:50.890077 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 07:52:50.890191 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 07:52:50.890287 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Oct 9 07:52:50.890432 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 07:52:50.890552 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 07:52:50.890662 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 07:52:50.890757 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 07:52:50.890866 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 9 07:52:50.890961 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Oct 9 07:52:50.891075 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 07:52:50.891171 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 07:52:50.891269 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 07:52:50.891382 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 07:52:50.891481 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 07:52:50.891575 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 07:52:50.891669 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Oct 9 07:52:50.891822 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 9 07:52:50.891919 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:52:50.892036 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:52:50.892153 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Oct 9 07:52:50.892247 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Oct 9 07:52:50.892341 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 07:52:50.892462 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:52:50.892580 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Oct 9 07:52:50.892675 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Oct 9 07:52:50.892775 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 07:52:50.892888 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Oct 9 07:52:50.892985 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Oct 9 07:52:50.893089 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Oct 9 07:52:50.893185 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 07:52:50.893298 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:52:50.893393 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:52:50.893492 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Oct 9 07:52:50.893586 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 07:52:50.893716 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:52:50.893811 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Oct 9 07:52:50.893905 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Oct 9 07:52:50.894024 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 9 07:52:50.894216 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 07:52:50.894350 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Oct 9 07:52:50.894449 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 9 07:52:50.894461 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:52:50.894470 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:52:50.894480 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:52:50.894489 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:52:50.894498 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 07:52:50.894511 kernel: iommu: Default domain type: Translated
Oct 9 07:52:50.894521 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:52:50.894530 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:52:50.894539 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:52:50.894547 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:52:50.894556 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Oct 9 07:52:50.894658 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 07:52:50.894753 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 07:52:50.894872 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:52:50.894884 kernel: vgaarb: loaded
Oct 9 07:52:50.894894 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:52:50.894903 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:52:50.894912 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:52:50.894921 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:52:50.894930 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:52:50.894939 kernel: pnp: PnP ACPI init
Oct 9 07:52:50.894948 kernel: pnp: PnP ACPI: found 4 devices
Oct 9 07:52:50.894961 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:52:50.894970 kernel: NET: Registered PF_INET protocol family
Oct 9 07:52:50.894978 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:52:50.894987 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 07:52:50.894996 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:52:50.895005 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 07:52:50.895014 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 07:52:50.895023 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 07:52:50.895032 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:52:50.895044 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:52:50.895053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:52:50.895189 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:52:50.895293 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:52:50.895379 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:52:50.895467 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:52:50.895564 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 07:52:50.895657 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 07:52:50.895759 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 07:52:50.895865 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 07:52:50.895878 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 07:52:50.895974 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 37208 usecs
Oct 9 07:52:50.895986 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:52:50.895995 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 9 07:52:50.896005 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Oct 9 07:52:50.896014 kernel: Initialise system trusted keyrings
Oct 9 07:52:50.896023 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 07:52:50.896035 kernel: Key type asymmetric registered
Oct 9 07:52:50.896044 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:52:50.896052 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:52:50.896157 kernel: io scheduler mq-deadline registered
Oct 9 07:52:50.896166 kernel: io scheduler kyber registered
Oct 9 07:52:50.896175 kernel: io scheduler bfq registered
Oct 9 07:52:50.896184 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:52:50.896193 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 07:52:50.896202 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 07:52:50.896214 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 07:52:50.896223 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:52:50.896232 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:52:50.896241 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:52:50.896250 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:52:50.896259 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:52:50.896379 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 07:52:50.896393 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:52:50.896483 kernel: rtc_cmos 00:03: registered as rtc0
Oct 9 07:52:50.896595 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T07:52:50 UTC (1728460370)
Oct 9 07:52:50.896682 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 9 07:52:50.896694 kernel: intel_pstate: CPU model not supported
Oct 9 07:52:50.896703 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:52:50.896713 kernel: Segment Routing with IPv6
Oct 9 07:52:50.896727 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:52:50.896736 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:52:50.896745 kernel: Key type dns_resolver registered
Oct 9 07:52:50.896759 kernel: IPI shorthand broadcast: enabled
Oct 9 07:52:50.896768 kernel: sched_clock: Marking stable (858002195, 80472999)->(1033661286, -95186092)
Oct 9 07:52:50.896777 kernel: registered taskstats version 1
Oct 9 07:52:50.896786 kernel: Loading compiled-in X.509 certificates
Oct 9 07:52:50.896795 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 9 07:52:50.896804 kernel: Key type .fscrypt registered
Oct 9 07:52:50.896813 kernel: Key type fscrypt-provisioning registered
Oct 9 07:52:50.896822 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:52:50.896833 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:52:50.896842 kernel: ima: No architecture policies found
Oct 9 07:52:50.896851 kernel: clk: Disabling unused clocks
Oct 9 07:52:50.896860 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 9 07:52:50.896869 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:52:50.896897 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 9 07:52:50.896913 kernel: Run /init as init process
Oct 9 07:52:50.896923 kernel: with arguments:
Oct 9 07:52:50.896932 kernel: /init
Oct 9 07:52:50.896945 kernel: with environment:
Oct 9 07:52:50.896954 kernel: HOME=/
Oct 9 07:52:50.896963 kernel: TERM=linux
Oct 9 07:52:50.896972 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:52:50.896984 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:52:50.896996 systemd[1]: Detected virtualization kvm.
Oct 9 07:52:50.897006 systemd[1]: Detected architecture x86-64.
Oct 9 07:52:50.897016 systemd[1]: Running in initrd.
Oct 9 07:52:50.897028 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:52:50.897037 systemd[1]: Hostname set to .
Oct 9 07:52:50.897047 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:52:50.897074 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:52:50.897088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:52:50.897098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:52:50.897108 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:52:50.897118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:52:50.897131 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:52:50.897141 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:52:50.897152 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:52:50.897162 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:52:50.897172 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:52:50.897182 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:52:50.897192 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:52:50.897204 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:52:50.897214 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:52:50.897226 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:52:50.897236 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:52:50.897246 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:52:50.897259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:52:50.897269 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:52:50.897279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:52:50.897288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:52:50.897298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:52:50.897308 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:52:50.897318 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 07:52:50.897328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:52:50.897337 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 07:52:50.897350 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 07:52:50.897360 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:52:50.897370 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:52:50.897379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:52:50.897417 systemd-journald[182]: Collecting audit messages is disabled. Oct 9 07:52:50.897447 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 07:52:50.897457 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:52:50.897468 systemd-journald[182]: Journal started Oct 9 07:52:50.897490 systemd-journald[182]: Runtime Journal (/run/log/journal/29b1e822a4b84d45b5501495db4db540) is 4.9M, max 39.3M, 34.4M free. Oct 9 07:52:50.901983 systemd-modules-load[183]: Inserted module 'overlay' Oct 9 07:52:50.908076 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:52:50.906360 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 07:52:50.917330 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:52:50.930466 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 07:52:50.947113 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 9 07:52:50.948036 systemd-modules-load[183]: Inserted module 'br_netfilter' Oct 9 07:52:50.969521 kernel: Bridge firewalling registered Oct 9 07:52:50.950440 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:52:50.982411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:52:50.983098 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 07:52:50.990419 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:52:50.991593 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:52:50.992188 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:52:50.994948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:52:51.013371 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:52:51.018383 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 07:52:51.020124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:52:51.021400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:52:51.029292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 9 07:52:51.036769 dracut-cmdline[216]: dracut-dracut-053 Oct 9 07:52:51.041391 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 9 07:52:51.080268 systemd-resolved[220]: Positive Trust Anchors: Oct 9 07:52:51.080879 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:52:51.080918 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 07:52:51.084622 systemd-resolved[220]: Defaulting to hostname 'linux'. Oct 9 07:52:51.085823 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:52:51.087191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:52:51.139104 kernel: SCSI subsystem initialized Oct 9 07:52:51.149095 kernel: Loading iSCSI transport class v2.0-870. Oct 9 07:52:51.161096 kernel: iscsi: registered transport (tcp) Oct 9 07:52:51.183258 kernel: iscsi: registered transport (qla4xxx) Oct 9 07:52:51.183334 kernel: QLogic iSCSI HBA Driver Oct 9 07:52:51.231545 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Oct 9 07:52:51.242346 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 07:52:51.267652 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 07:52:51.267748 kernel: device-mapper: uevent: version 1.0.3 Oct 9 07:52:51.268698 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 07:52:51.311121 kernel: raid6: avx2x4 gen() 16303 MB/s Oct 9 07:52:51.328107 kernel: raid6: avx2x2 gen() 17375 MB/s Oct 9 07:52:51.345527 kernel: raid6: avx2x1 gen() 13215 MB/s Oct 9 07:52:51.345622 kernel: raid6: using algorithm avx2x2 gen() 17375 MB/s Oct 9 07:52:51.363380 kernel: raid6: .... xor() 20219 MB/s, rmw enabled Oct 9 07:52:51.363460 kernel: raid6: using avx2x2 recovery algorithm Oct 9 07:52:51.386123 kernel: xor: automatically using best checksumming function avx Oct 9 07:52:51.553102 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 07:52:51.567785 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:52:51.574387 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:52:51.597924 systemd-udevd[402]: Using default interface naming scheme 'v255'. Oct 9 07:52:51.604940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:52:51.614329 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 07:52:51.633156 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Oct 9 07:52:51.675897 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:52:51.682461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:52:51.753179 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:52:51.762432 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Oct 9 07:52:51.796905 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 07:52:51.800606 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:52:51.801786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:52:51.803665 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:52:51.812257 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 07:52:51.850109 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Oct 9 07:52:51.852475 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:52:51.865759 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 9 07:52:51.887112 kernel: libata version 3.00 loaded. Oct 9 07:52:51.890575 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 07:52:51.890685 kernel: GPT:9289727 != 125829119 Oct 9 07:52:51.890726 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 07:52:51.890742 kernel: GPT:9289727 != 125829119 Oct 9 07:52:51.891298 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 9 07:52:51.892755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:52:51.894297 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 07:52:51.894371 kernel: scsi host0: Virtio SCSI HBA Oct 9 07:52:51.910202 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 9 07:52:51.913103 kernel: scsi host1: ata_piix Oct 9 07:52:51.918429 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Oct 9 07:52:51.918574 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Oct 9 07:52:51.918684 kernel: scsi host2: ata_piix Oct 9 07:52:51.918817 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Oct 9 07:52:51.918833 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Oct 9 07:52:51.945458 kernel: ACPI: bus type USB registered Oct 9 07:52:51.945546 kernel: usbcore: registered new interface driver usbfs Oct 9 07:52:51.945578 kernel: usbcore: registered new interface driver hub Oct 9 07:52:51.945597 kernel: usbcore: registered new device driver usb Oct 9 07:52:51.946753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:52:51.946899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:52:51.947689 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:52:51.948098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:52:51.948242 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:52:51.948981 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:52:51.954414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:52:51.998071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:52:52.005358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 9 07:52:52.029523 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:52:52.083522 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 07:52:52.083609 kernel: AES CTR mode by8 optimization enabled Oct 9 07:52:52.142089 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) Oct 9 07:52:52.144115 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (452) Oct 9 07:52:52.148597 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 07:52:52.162099 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 07:52:52.166091 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Oct 9 07:52:52.166368 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Oct 9 07:52:52.166494 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Oct 9 07:52:52.169120 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Oct 9 07:52:52.171270 kernel: hub 1-0:1.0: USB hub found Oct 9 07:52:52.171510 kernel: hub 1-0:1.0: 2 ports detected Oct 9 07:52:52.177218 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:52:52.181785 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 07:52:52.183117 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 07:52:52.188363 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 07:52:52.208923 disk-uuid[548]: Primary Header is updated. Oct 9 07:52:52.208923 disk-uuid[548]: Secondary Entries is updated. Oct 9 07:52:52.208923 disk-uuid[548]: Secondary Header is updated. 
Oct 9 07:52:52.217106 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:52:52.227101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:52:52.232085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:52:53.233096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:52:53.233780 disk-uuid[549]: The operation has completed successfully. Oct 9 07:52:53.279463 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 07:52:53.279572 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 07:52:53.286357 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 07:52:53.291578 sh[562]: Success Oct 9 07:52:53.305110 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 9 07:52:53.369919 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 07:52:53.372250 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 07:52:53.373327 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 07:52:53.393290 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 9 07:52:53.393368 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:52:53.393387 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 07:52:53.395375 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 07:52:53.395459 kernel: BTRFS info (device dm-0): using free space tree Oct 9 07:52:53.402362 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 07:52:53.403465 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 07:52:53.408242 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 9 07:52:53.410250 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 07:52:53.422091 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:52:53.422157 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:52:53.422171 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:52:53.426272 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:52:53.439107 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:52:53.439184 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 07:52:53.446272 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 07:52:53.453427 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 07:52:53.560661 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:52:53.574847 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:52:53.605382 ignition[650]: Ignition 2.19.0 Oct 9 07:52:53.605396 ignition[650]: Stage: fetch-offline Oct 9 07:52:53.605457 ignition[650]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:52:53.605473 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:52:53.605590 ignition[650]: parsed url from cmdline: "" Oct 9 07:52:53.605593 ignition[650]: no config URL provided Oct 9 07:52:53.605598 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:52:53.605607 ignition[650]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:52:53.610197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Oct 9 07:52:53.605613 ignition[650]: failed to fetch config: resource requires networking Oct 9 07:52:53.605818 ignition[650]: Ignition finished successfully Oct 9 07:52:53.618721 systemd-networkd[750]: lo: Link UP Oct 9 07:52:53.618732 systemd-networkd[750]: lo: Gained carrier Oct 9 07:52:53.621230 systemd-networkd[750]: Enumeration completed Oct 9 07:52:53.621619 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 9 07:52:53.621623 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Oct 9 07:52:53.622472 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:52:53.623274 systemd[1]: Reached target network.target - Network. Oct 9 07:52:53.623495 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:52:53.623516 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:52:53.624406 systemd-networkd[750]: eth0: Link UP Oct 9 07:52:53.624411 systemd-networkd[750]: eth0: Gained carrier Oct 9 07:52:53.624420 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 9 07:52:53.628776 systemd-networkd[750]: eth1: Link UP Oct 9 07:52:53.628782 systemd-networkd[750]: eth1: Gained carrier Oct 9 07:52:53.628799 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:52:53.633394 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 9 07:52:53.643166 systemd-networkd[750]: eth0: DHCPv4 address 143.198.229.119/20, gateway 143.198.224.1 acquired from 169.254.169.253 Oct 9 07:52:53.647206 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253 Oct 9 07:52:53.655363 ignition[754]: Ignition 2.19.0 Oct 9 07:52:53.655380 ignition[754]: Stage: fetch Oct 9 07:52:53.655653 ignition[754]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:52:53.655672 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:52:53.655849 ignition[754]: parsed url from cmdline: "" Oct 9 07:52:53.655855 ignition[754]: no config URL provided Oct 9 07:52:53.655864 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:52:53.655884 ignition[754]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:52:53.655917 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Oct 9 07:52:53.671690 ignition[754]: GET result: OK Oct 9 07:52:53.671881 ignition[754]: parsing config with SHA512: 6aea0100703d4820f53479f4d7b8d1fff1d2aa080519cceba8e3915866e41dd116d81af5f01cdec616caacf4a8400a184acec5d7438b960f6215e68274dcf17d Oct 9 07:52:53.677207 unknown[754]: fetched base config from "system" Oct 9 07:52:53.677220 unknown[754]: fetched base config from "system" Oct 9 07:52:53.677746 ignition[754]: fetch: fetch complete Oct 9 07:52:53.677239 unknown[754]: fetched user config from "digitalocean" Oct 9 07:52:53.677752 ignition[754]: fetch: fetch passed Oct 9 07:52:53.679984 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 07:52:53.677810 ignition[754]: Ignition finished successfully Oct 9 07:52:53.685292 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 9 07:52:53.705222 ignition[761]: Ignition 2.19.0 Oct 9 07:52:53.705238 ignition[761]: Stage: kargs Oct 9 07:52:53.705441 ignition[761]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:52:53.705454 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:52:53.706336 ignition[761]: kargs: kargs passed Oct 9 07:52:53.707999 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 07:52:53.706391 ignition[761]: Ignition finished successfully Oct 9 07:52:53.714313 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 07:52:53.735230 ignition[767]: Ignition 2.19.0 Oct 9 07:52:53.735241 ignition[767]: Stage: disks Oct 9 07:52:53.735430 ignition[767]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:52:53.735442 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:52:53.736366 ignition[767]: disks: disks passed Oct 9 07:52:53.737628 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 07:52:53.736424 ignition[767]: Ignition finished successfully Oct 9 07:52:53.741917 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 07:52:53.742638 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:52:53.743253 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:52:53.743991 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:52:53.744914 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:52:53.751335 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 07:52:53.768972 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 07:52:53.771754 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 07:52:53.776194 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 9 07:52:53.882090 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 9 07:52:53.882795 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 07:52:53.883772 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 07:52:53.889209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:52:53.891184 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 07:52:53.895221 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Oct 9 07:52:53.903335 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 9 07:52:53.906840 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784) Oct 9 07:52:53.907296 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 07:52:53.907340 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:52:53.911096 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 07:52:53.918551 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:52:53.918580 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:52:53.918594 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:52:53.918606 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:52:53.918929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:52:53.929328 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 9 07:52:53.990282 coreos-metadata[787]: Oct 09 07:52:53.990 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:52:54.000941 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 07:52:54.002009 coreos-metadata[786]: Oct 09 07:52:54.001 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:52:54.007073 coreos-metadata[787]: Oct 09 07:52:54.004 INFO Fetch successful Oct 9 07:52:54.007526 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory Oct 9 07:52:54.012209 coreos-metadata[787]: Oct 09 07:52:54.011 INFO wrote hostname ci-4081.1.0-5-a4f881141a to /sysroot/etc/hostname Oct 9 07:52:54.012966 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 9 07:52:54.014147 coreos-metadata[786]: Oct 09 07:52:54.012 INFO Fetch successful Oct 9 07:52:54.016109 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 07:52:54.023814 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Oct 9 07:52:54.023943 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Oct 9 07:52:54.026923 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 07:52:54.135821 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 07:52:54.140246 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 07:52:54.145333 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 07:52:54.159089 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:52:54.182967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 9 07:52:54.191722 ignition[907]: INFO : Ignition 2.19.0 Oct 9 07:52:54.191722 ignition[907]: INFO : Stage: mount Oct 9 07:52:54.193054 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:52:54.193054 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:52:54.194398 ignition[907]: INFO : mount: mount passed Oct 9 07:52:54.194398 ignition[907]: INFO : Ignition finished successfully Oct 9 07:52:54.194466 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 07:52:54.200250 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 07:52:54.392955 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 07:52:54.400395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:52:54.409091 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918) Oct 9 07:52:54.411483 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:52:54.411557 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:52:54.411579 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:52:54.415113 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:52:54.417966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 07:52:54.442777 ignition[935]: INFO : Ignition 2.19.0 Oct 9 07:52:54.442777 ignition[935]: INFO : Stage: files Oct 9 07:52:54.443856 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:52:54.443856 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:52:54.444811 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:52:54.445510 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:52:54.445510 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:52:54.449532 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:52:54.450196 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:52:54.450196 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:52:54.450104 unknown[935]: wrote ssh authorized keys file for user: core Oct 9 07:52:54.451984 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:52:54.451984 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:52:54.491107 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 07:52:54.578626 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:52:54.578626 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:52:54.580989 
ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:52:54.580989 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 9 07:52:54.782341 systemd-networkd[750]: eth1: Gained IPv6LL Oct 9 07:52:55.025940 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 9 07:52:55.166601 systemd-networkd[750]: eth0: Gained IPv6LL Oct 9 07:52:55.244390 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:52:55.245395 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 9 07:52:55.246881 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:52:55.248352 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:52:55.248352 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 9 07:52:55.248352 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 9 07:52:55.248352 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 07:52:55.251870 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:52:55.251870 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:52:55.251870 ignition[935]: INFO : files: files passed Oct 9 07:52:55.251870 ignition[935]: INFO : Ignition finished successfully Oct 9 07:52:55.250378 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 07:52:55.257319 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Oct 9 07:52:55.259380 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:52:55.262589 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:52:55.263180 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:52:55.287848 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:52:55.287848 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:52:55.290572 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:52:55.292493 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:52:55.294119 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:52:55.310421 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:52:55.355134 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:52:55.355317 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:52:55.356661 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:52:55.357371 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:52:55.358488 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:52:55.372397 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:52:55.391121 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:52:55.398264 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:52:55.422315 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:52:55.422415 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:52:55.424883 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:52:55.425302 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:52:55.426220 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:52:55.427046 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:52:55.427208 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:52:55.428236 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:52:55.428796 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:52:55.429684 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:52:55.430535 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:52:55.431350 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:52:55.432218 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:52:55.432980 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:52:55.433800 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:52:55.434455 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:52:55.435195 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:52:55.435551 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:52:55.435646 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:52:55.436706 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:52:55.437128 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:52:55.437795 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:52:55.437951 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:52:55.438443 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:52:55.438507 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:52:55.439624 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:52:55.439674 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:52:55.440129 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:52:55.440172 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:52:55.440990 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 07:52:55.441035 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:52:55.449205 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:52:55.453228 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:52:55.454362 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:52:55.454463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:52:55.456873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:52:55.456947 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:52:55.465708 ignition[989]: INFO : Ignition 2.19.0
Oct 9 07:52:55.465708 ignition[989]: INFO : Stage: umount
Oct 9 07:52:55.465708 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:55.465708 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:55.465708 ignition[989]: INFO : umount: umount passed
Oct 9 07:52:55.469697 ignition[989]: INFO : Ignition finished successfully
Oct 9 07:52:55.467519 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:52:55.467699 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:52:55.478716 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:52:55.478804 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:52:55.479273 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:52:55.479325 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:52:55.479652 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 07:52:55.479687 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 07:52:55.480003 systemd[1]: Stopped target network.target - Network.
Oct 9 07:52:55.485282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:52:55.485353 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:52:55.485950 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:52:55.486314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:52:55.486422 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:52:55.487386 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:52:55.488205 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:52:55.489125 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:52:55.489173 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:52:55.490046 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:52:55.490106 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:52:55.490474 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:52:55.490523 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:52:55.491518 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:52:55.491568 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:52:55.493279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:52:55.494484 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:52:55.498129 systemd-networkd[750]: eth0: DHCPv6 lease lost
Oct 9 07:52:55.500415 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:52:55.501151 systemd-networkd[750]: eth1: DHCPv6 lease lost
Oct 9 07:52:55.503375 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:52:55.503496 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:52:55.504860 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:52:55.504920 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:52:55.512253 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:52:55.512852 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:52:55.512961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:52:55.514056 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:52:55.517932 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:52:55.518115 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:52:55.524609 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:52:55.524767 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:52:55.532671 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:52:55.532842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:52:55.533971 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:52:55.534053 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:52:55.535485 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:52:55.535559 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:52:55.536093 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:52:55.536163 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 07:52:55.538329 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:52:55.542628 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:52:55.544626 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:52:55.544760 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:52:55.547852 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:52:55.547940 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:52:55.549108 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:52:55.549170 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:52:55.549997 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:52:55.550088 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:52:55.551378 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:52:55.551453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:52:55.552322 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:52:55.552390 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:52:55.560373 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:52:55.561011 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:52:55.561128 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:52:55.561713 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 07:52:55.561767 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:52:55.564702 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:52:55.564786 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:52:55.565704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:52:55.565775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:55.573155 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:52:55.573330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:52:55.574696 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:52:55.580323 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:52:55.604451 systemd[1]: Switching root.
Oct 9 07:52:55.652430 systemd-journald[182]: Journal stopped
Oct 9 07:52:56.810605 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:52:56.810700 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:52:56.810716 kernel: SELinux: policy capability open_perms=1
Oct 9 07:52:56.810728 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:52:56.810739 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:52:56.810751 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:52:56.810769 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:52:56.810780 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:52:56.810795 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:52:56.810806 kernel: audit: type=1403 audit(1728460375.817:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:52:56.810822 systemd[1]: Successfully loaded SELinux policy in 41.915ms.
Oct 9 07:52:56.810844 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.034ms.
Oct 9 07:52:56.810858 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:52:56.810870 systemd[1]: Detected virtualization kvm.
Oct 9 07:52:56.810887 systemd[1]: Detected architecture x86-64.
Oct 9 07:52:56.810899 systemd[1]: Detected first boot.
Oct 9 07:52:56.810915 systemd[1]: Hostname set to .
Oct 9 07:52:56.810929 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:52:56.810942 zram_generator::config[1030]: No configuration found.
Oct 9 07:52:56.810955 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:52:56.810967 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 07:52:56.810980 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 07:52:56.810995 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:52:56.811008 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:52:56.811021 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:52:56.811034 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:52:56.811046 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:52:56.811072 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:52:56.811086 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:52:56.813288 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:52:56.813326 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:52:56.813348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:52:56.813362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:52:56.813374 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:52:56.813387 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:52:56.813404 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:52:56.813416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:52:56.813429 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:52:56.813442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:52:56.813455 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 07:52:56.813471 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 07:52:56.813483 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:52:56.813495 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:52:56.813508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:52:56.813521 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:52:56.813533 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:52:56.813548 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:52:56.813561 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:52:56.813573 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:52:56.813586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:52:56.813619 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:52:56.813633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:52:56.813645 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:52:56.813658 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:52:56.813670 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:52:56.813686 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:52:56.813698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:56.813711 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:52:56.813723 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:52:56.813736 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:52:56.813750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:52:56.813762 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:52:56.813775 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:52:56.813788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:52:56.813804 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:52:56.813816 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:52:56.813829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:52:56.813842 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:52:56.813854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:52:56.813866 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:52:56.813878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:52:56.813896 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:52:56.813911 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 07:52:56.813924 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 07:52:56.813937 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 07:52:56.813949 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 07:52:56.813961 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:52:56.813974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:52:56.813986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:52:56.813999 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:52:56.814011 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:52:56.814026 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 07:52:56.814038 systemd[1]: Stopped verity-setup.service.
Oct 9 07:52:56.814051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:56.816018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:52:56.816047 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:52:56.816071 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:52:56.816085 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:52:56.816106 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:52:56.816127 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:52:56.816140 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:52:56.816156 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:52:56.816169 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:52:56.816183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:52:56.816205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:52:56.816223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:52:56.816254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:52:56.816268 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:52:56.816281 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:52:56.816294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:52:56.816311 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:52:56.816324 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:52:56.816337 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:52:56.816349 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:52:56.816362 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:52:56.816377 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:52:56.816395 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:52:56.816408 kernel: loop: module loaded
Oct 9 07:52:56.816471 systemd-journald[1110]: Collecting audit messages is disabled.
Oct 9 07:52:56.816501 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:52:56.816515 systemd-journald[1110]: Journal started
Oct 9 07:52:56.816554 systemd-journald[1110]: Runtime Journal (/run/log/journal/29b1e822a4b84d45b5501495db4db540) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:52:56.500704 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:52:56.525073 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:52:56.525505 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 07:52:56.825360 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:52:56.825444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:52:56.829085 kernel: fuse: init (API version 7.39)
Oct 9 07:52:56.832157 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:52:56.835916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:52:56.848097 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:52:56.856404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:52:56.861314 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:52:56.861394 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:52:56.865144 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:52:56.865985 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:52:56.866152 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:52:56.866874 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:52:56.867023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:52:56.867695 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:52:56.886319 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:52:56.898752 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:52:56.909230 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:52:56.929093 kernel: ACPI: bus type drm_connector registered
Oct 9 07:52:56.933121 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:52:56.938392 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:52:56.938877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:52:56.941531 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:52:56.943174 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:52:56.943840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:52:56.973742 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:52:56.974546 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:52:56.992845 systemd-journald[1110]: Time spent on flushing to /var/log/journal/29b1e822a4b84d45b5501495db4db540 is 61.465ms for 994 entries.
Oct 9 07:52:56.992845 systemd-journald[1110]: System Journal (/var/log/journal/29b1e822a4b84d45b5501495db4db540) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:52:57.077118 systemd-journald[1110]: Received client request to flush runtime journal.
Oct 9 07:52:57.077226 kernel: loop0: detected capacity change from 0 to 140768
Oct 9 07:52:57.077245 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:52:56.994700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:52:57.012287 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:52:57.024368 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:52:57.030961 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Oct 9 07:52:57.030977 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Oct 9 07:52:57.049680 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:52:57.063488 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:52:57.077050 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 07:52:57.081074 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:52:57.088088 kernel: loop1: detected capacity change from 0 to 211296
Oct 9 07:52:57.125253 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:52:57.137851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:52:57.150478 kernel: loop2: detected capacity change from 0 to 142488
Oct 9 07:52:57.177681 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Oct 9 07:52:57.177701 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Oct 9 07:52:57.185522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:52:57.204488 kernel: loop3: detected capacity change from 0 to 8
Oct 9 07:52:57.229119 kernel: loop4: detected capacity change from 0 to 140768
Oct 9 07:52:57.255302 kernel: loop5: detected capacity change from 0 to 211296
Oct 9 07:52:57.273093 kernel: loop6: detected capacity change from 0 to 142488
Oct 9 07:52:57.287101 kernel: loop7: detected capacity change from 0 to 8
Oct 9 07:52:57.288102 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 9 07:52:57.288705 (sd-merge)[1178]: Merged extensions into '/usr'.
Oct 9 07:52:57.294425 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:52:57.294541 systemd[1]: Reloading...
Oct 9 07:52:57.473251 zram_generator::config[1204]: No configuration found.
Oct 9 07:52:57.484095 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:52:57.641928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:52:57.702627 systemd[1]: Reloading finished in 407 ms.
Oct 9 07:52:57.736411 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:52:57.740497 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:52:57.748424 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:52:57.758542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 07:52:57.770217 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:52:57.770231 systemd[1]: Reloading...
Oct 9 07:52:57.809658 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:52:57.812048 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:52:57.814238 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:52:57.814628 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Oct 9 07:52:57.814744 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Oct 9 07:52:57.819997 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:52:57.820183 systemd-tmpfiles[1248]: Skipping /boot
Oct 9 07:52:57.860094 zram_generator::config[1270]: No configuration found.
Oct 9 07:52:57.867242 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:52:57.870117 systemd-tmpfiles[1248]: Skipping /boot
Oct 9 07:52:58.018424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:52:58.075407 systemd[1]: Reloading finished in 304 ms.
Oct 9 07:52:58.095895 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:52:58.101674 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 07:52:58.114337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:52:58.117449 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:52:58.121274 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:52:58.124327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:52:58.129364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:52:58.137839 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 07:52:58.145561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.145780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:52:58.154803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:52:58.160394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:52:58.162371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:52:58.165430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:52:58.165569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.170942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.171304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:52:58.171479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:52:58.171574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.176111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.176340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:52:58.183475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:52:58.184065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:52:58.191371 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 07:52:58.193192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.197497 systemd[1]: Finished ensure-sysext.service.
Oct 9 07:52:58.210191 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 07:52:58.214108 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 07:52:58.215008 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 07:52:58.217825 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:52:58.228476 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Oct 9 07:52:58.229467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:52:58.229641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:52:58.263145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 07:52:58.263985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:52:58.264147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:52:58.264882 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:52:58.265012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:52:58.268420 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:52:58.276816 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 07:52:58.277454 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 07:52:58.278158 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:52:58.279117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:52:58.279928 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:52:58.291362 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:52:58.291820 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:52:58.295844 augenrules[1365]: No rules
Oct 9 07:52:58.298630 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:52:58.314001 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 07:52:58.403090 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1361)
Oct 9 07:52:58.413192 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1361)
Oct 9 07:52:58.445183 systemd-networkd[1366]: lo: Link UP
Oct 9 07:52:58.445194 systemd-networkd[1366]: lo: Gained carrier
Oct 9 07:52:58.453225 systemd-networkd[1366]: Enumeration completed
Oct 9 07:52:58.453694 systemd-networkd[1366]: eth0: Configuring with /run/systemd/network/10-22:7c:a0:f0:72:24.network.
Oct 9 07:52:58.454237 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:52:58.460352 systemd-networkd[1366]: eth1: Configuring with /run/systemd/network/10-9a:d4:a7:c6:52:95.network.
Oct 9 07:52:58.461286 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 07:52:58.461761 systemd-networkd[1366]: eth0: Link UP
Oct 9 07:52:58.461769 systemd-networkd[1366]: eth0: Gained carrier
Oct 9 07:52:58.468416 systemd-networkd[1366]: eth1: Link UP
Oct 9 07:52:58.468428 systemd-networkd[1366]: eth1: Gained carrier
Oct 9 07:52:58.500707 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 07:52:58.501229 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 07:52:58.507487 systemd-resolved[1324]: Positive Trust Anchors:
Oct 9 07:52:58.507505 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:52:58.507567 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 07:52:58.511526 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 07:52:58.515717 systemd-resolved[1324]: Using system hostname 'ci-4081.1.0-5-a4f881141a'.
Oct 9 07:52:58.522609 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:52:58.523285 systemd[1]: Reached target network.target - Network.
Oct 9 07:52:58.523704 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:52:58.555342 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 9 07:52:58.555937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.556283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:52:58.563459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:52:58.571360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:52:58.584393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:52:58.585151 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:52:58.585207 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:52:58.585244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:58.590659 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 9 07:52:58.601182 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 07:52:58.607423 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 9 07:52:58.607769 kernel: ACPI: button: Power Button [PWRF]
Oct 9 07:52:58.610607 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 9 07:52:58.611761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:52:58.611980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:52:58.612877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:52:58.614036 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:52:58.620507 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:52:58.621688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:52:58.622082 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 07:52:58.626877 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:52:58.626978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:52:58.633086 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1372)
Oct 9 07:52:58.725210 systemd-timesyncd[1339]: Contacted time server 50.205.57.38:123 (0.flatcar.pool.ntp.org).
Oct 9 07:52:58.725371 systemd-timesyncd[1339]: Initial clock synchronization to Wed 2024-10-09 07:52:58.417760 UTC.
Oct 9 07:52:58.732088 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 07:52:58.737113 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 9 07:52:58.737488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:52:58.745087 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 9 07:52:58.753167 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:52:58.760733 kernel: Console: switching to colour dummy device 80x25
Oct 9 07:52:58.763087 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 07:52:58.763164 kernel: [drm] features: -context_init
Oct 9 07:52:58.771367 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 07:52:58.776491 kernel: [drm] number of scanouts: 1
Oct 9 07:52:58.776591 kernel: [drm] number of cap sets: 0
Oct 9 07:52:58.780282 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Oct 9 07:52:58.790561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:52:58.790890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:58.810480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:52:58.817157 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 07:52:58.817202 kernel: Console: switching to colour frame buffer device 128x48
Oct 9 07:52:58.827905 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 07:52:58.830913 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 07:52:58.851156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:52:58.851353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:58.893514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:52:58.964105 kernel: EDAC MC: Ver: 3.0.0
Oct 9 07:52:58.973972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:58.987601 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 07:52:58.996428 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 07:52:59.013089 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:52:59.045201 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 07:52:59.047795 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:52:59.047904 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:52:59.048087 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 07:52:59.048191 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 07:52:59.048437 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 07:52:59.048628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 07:52:59.048713 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 07:52:59.048773 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 07:52:59.048798 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:52:59.048845 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:52:59.049942 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 07:52:59.051697 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 07:52:59.057333 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 07:52:59.058883 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 07:52:59.062453 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 07:52:59.063868 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:52:59.066639 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:52:59.067200 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:52:59.067228 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:52:59.071273 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 07:52:59.072569 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:52:59.097628 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 07:52:59.109447 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 07:52:59.124415 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 07:52:59.129656 coreos-metadata[1438]: Oct 09 07:52:59.129 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:52:59.130297 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 07:52:59.130934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 07:52:59.134263 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 07:52:59.138690 jq[1442]: false
Oct 9 07:52:59.146215 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 07:52:59.150196 coreos-metadata[1438]: Oct 09 07:52:59.150 INFO Fetch successful
Oct 9 07:52:59.151383 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 07:52:59.159303 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 07:52:59.174622 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 07:52:59.179579 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 07:52:59.180367 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 07:52:59.185348 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 07:52:59.188596 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found loop4
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found loop5
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found loop6
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found loop7
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda1
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda2
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda3
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found usr
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda4
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda6
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda7
Oct 9 07:52:59.197193 extend-filesystems[1443]: Found vda9
Oct 9 07:52:59.197193 extend-filesystems[1443]: Checking size of /dev/vda9
Oct 9 07:52:59.193304 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 07:52:59.262247 dbus-daemon[1441]: [system] SELinux support is enabled
Oct 9 07:52:59.319951 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Oct 9 07:52:59.319982 extend-filesystems[1443]: Resized partition /dev/vda9
Oct 9 07:52:59.208537 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 07:52:59.321363 update_engine[1450]: I20241009 07:52:59.225369 1450 main.cc:92] Flatcar Update Engine starting
Oct 9 07:52:59.321363 update_engine[1450]: I20241009 07:52:59.283587 1450 update_check_scheduler.cc:74] Next update check in 3m3s
Oct 9 07:52:59.321734 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024)
Oct 9 07:52:59.209246 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 07:52:59.251550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 07:52:59.328705 tar[1457]: linux-amd64/helm
Oct 9 07:52:59.251734 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 07:52:59.263252 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 07:52:59.294220 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 07:52:59.331845 jq[1451]: true
Oct 9 07:52:59.294253 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 07:52:59.295917 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 07:52:59.295997 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 9 07:52:59.296019 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 07:52:59.297592 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 07:52:59.313677 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 07:52:59.317371 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 07:52:59.330082 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 07:52:59.330563 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 07:52:59.363485 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1364)
Oct 9 07:52:59.370832 jq[1480]: true
Oct 9 07:52:59.380231 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 07:52:59.384317 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 07:52:59.429581 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:52:59.430288 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 07:52:59.445374 systemd[1]: Starting sshkeys.service...
Oct 9 07:52:59.482081 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 9 07:52:59.499921 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 07:52:59.508429 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 07:52:59.513832 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 07:52:59.513832 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 9 07:52:59.513832 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 9 07:52:59.516166 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Oct 9 07:52:59.516166 extend-filesystems[1443]: Found vdb
Oct 9 07:52:59.513927 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 07:52:59.514125 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 07:52:59.519629 systemd-networkd[1366]: eth0: Gained IPv6LL
Oct 9 07:52:59.543964 systemd-logind[1448]: New seat seat0.
Oct 9 07:52:59.544482 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 07:52:59.545342 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 07:52:59.561491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:52:59.567027 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 07:52:59.567107 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 07:52:59.578800 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 07:52:59.581146 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 07:52:59.673901 coreos-metadata[1504]: Oct 09 07:52:59.673 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:52:59.677189 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 07:52:59.710974 coreos-metadata[1504]: Oct 09 07:52:59.706 INFO Fetch successful
Oct 9 07:52:59.728476 unknown[1504]: wrote ssh authorized keys file for user: core
Oct 9 07:52:59.737512 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 07:52:59.767859 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:52:59.769292 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 07:52:59.777871 systemd[1]: Finished sshkeys.service.
Oct 9 07:52:59.808387 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 07:52:59.838807 systemd-networkd[1366]: eth1: Gained IPv6LL
Oct 9 07:52:59.871625 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 07:52:59.883458 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 07:52:59.905452 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 07:52:59.905749 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 07:52:59.919651 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 07:52:59.936373 containerd[1473]: time="2024-10-09T07:52:59.936155249Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 9 07:52:59.954584 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 07:52:59.963701 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 07:52:59.974687 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 07:52:59.979687 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 07:53:00.013817 containerd[1473]: time="2024-10-09T07:53:00.013554963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:53:00.019755 containerd[1473]: time="2024-10-09T07:53:00.019696485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:53:00.019886 containerd[1473]: time="2024-10-09T07:53:00.019873435Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 07:53:00.020875 containerd[1473]: time="2024-10-09T07:53:00.020841910Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021150776Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021173011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021235226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021248076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021477594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021495742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021530646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021540529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021637912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:53:00.022075 containerd[1473]: time="2024-10-09T07:53:00.021913703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:53:00.024177 containerd[1473]: time="2024-10-09T07:53:00.024144945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:53:00.024501 containerd[1473]: time="2024-10-09T07:53:00.024274321Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 07:53:00.024501 containerd[1473]: time="2024-10-09T07:53:00.024416728Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 07:53:00.024501 containerd[1473]: time="2024-10-09T07:53:00.024468261Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 07:53:00.034617 containerd[1473]: time="2024-10-09T07:53:00.034557436Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 07:53:00.035324 containerd[1473]: time="2024-10-09T07:53:00.035288111Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 07:53:00.035553 containerd[1473]: time="2024-10-09T07:53:00.035501322Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 07:53:00.035553 containerd[1473]: time="2024-10-09T07:53:00.035527208Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 07:53:00.035799 containerd[1473]: time="2024-10-09T07:53:00.035652280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:53:00.037466 containerd[1473]: time="2024-10-09T07:53:00.037388574Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.037790740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.037970202Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.037986744Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038000765Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038013959Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038034623Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038049636Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038102808Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038128497Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038157605Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038173699Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038185490Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:53:00.038227 containerd[1473]: time="2024-10-09T07:53:00.038205771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.038634 containerd[1473]: time="2024-10-09T07:53:00.038572934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.038634 containerd[1473]: time="2024-10-09T07:53:00.038597278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.038634 containerd[1473]: time="2024-10-09T07:53:00.038611159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.038728 containerd[1473]: time="2024-10-09T07:53:00.038623156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.038790 containerd[1473]: time="2024-10-09T07:53:00.038766551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.038830 containerd[1473]: time="2024-10-09T07:53:00.038781851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.038893 containerd[1473]: time="2024-10-09T07:53:00.038867260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039071 containerd[1473]: time="2024-10-09T07:53:00.038883296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039071 containerd[1473]: time="2024-10-09T07:53:00.039010722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039071 containerd[1473]: time="2024-10-09T07:53:00.039036531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039177 containerd[1473]: time="2024-10-09T07:53:00.039164846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039227 containerd[1473]: time="2024-10-09T07:53:00.039214120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039300 containerd[1473]: time="2024-10-09T07:53:00.039289421Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:53:00.039945 containerd[1473]: time="2024-10-09T07:53:00.039359629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039945 containerd[1473]: time="2024-10-09T07:53:00.039374067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.039945 containerd[1473]: time="2024-10-09T07:53:00.039384180Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040188592Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040350821Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040378980Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040396476Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040410666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040468738Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1 Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040487634Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:53:00.040683 containerd[1473]: time="2024-10-09T07:53:00.040505461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 07:53:00.041799 containerd[1473]: time="2024-10-09T07:53:00.041348313Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:53:00.043289 containerd[1473]: time="2024-10-09T07:53:00.042091798Z" level=info msg="Connect containerd service" Oct 9 07:53:00.043289 containerd[1473]: time="2024-10-09T07:53:00.042147164Z" level=info msg="using legacy CRI server" Oct 9 07:53:00.043289 containerd[1473]: time="2024-10-09T07:53:00.042155518Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:53:00.043289 containerd[1473]: time="2024-10-09T07:53:00.042289634Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:53:00.044353 containerd[1473]: time="2024-10-09T07:53:00.044104213Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:53:00.045669 containerd[1473]: time="2024-10-09T07:53:00.045620965Z" level=info msg="Start subscribing containerd event" Oct 9 
07:53:00.046326 containerd[1473]: time="2024-10-09T07:53:00.046300888Z" level=info msg="Start recovering state" Oct 9 07:53:00.052096 containerd[1473]: time="2024-10-09T07:53:00.050175913Z" level=info msg="Start event monitor" Oct 9 07:53:00.052096 containerd[1473]: time="2024-10-09T07:53:00.050238254Z" level=info msg="Start snapshots syncer" Oct 9 07:53:00.052096 containerd[1473]: time="2024-10-09T07:53:00.050250475Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:53:00.052096 containerd[1473]: time="2024-10-09T07:53:00.050259223Z" level=info msg="Start streaming server" Oct 9 07:53:00.052096 containerd[1473]: time="2024-10-09T07:53:00.047455050Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:53:00.052096 containerd[1473]: time="2024-10-09T07:53:00.050483969Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:53:00.052096 containerd[1473]: time="2024-10-09T07:53:00.050536761Z" level=info msg="containerd successfully booted in 0.119525s" Oct 9 07:53:00.050777 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:53:00.357444 tar[1457]: linux-amd64/LICENSE Oct 9 07:53:00.357444 tar[1457]: linux-amd64/README.md Oct 9 07:53:00.369216 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:53:00.729700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:00.732414 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:53:00.733973 systemd[1]: Startup finished in 996ms (kernel) + 5.118s (initrd) + 4.957s (userspace) = 11.071s. 
Oct 9 07:53:00.748459 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:53:01.497019 kubelet[1562]: E1009 07:53:01.496857 1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:53:01.499687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:53:01.499900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:53:01.500716 systemd[1]: kubelet.service: Consumed 1.232s CPU time. Oct 9 07:53:03.786526 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:53:03.792551 systemd[1]: Started sshd@0-143.198.229.119:22-139.178.89.65:58556.service - OpenSSH per-connection server daemon (139.178.89.65:58556). Oct 9 07:53:03.853831 sshd[1576]: Accepted publickey for core from 139.178.89.65 port 58556 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:03.855893 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:03.865575 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:53:03.869353 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:53:03.871954 systemd-logind[1448]: New session 1 of user core. Oct 9 07:53:03.888844 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:53:03.897527 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 9 07:53:03.902187 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:53:04.004863 systemd[1580]: Queued start job for default target default.target. Oct 9 07:53:04.015959 systemd[1580]: Created slice app.slice - User Application Slice. Oct 9 07:53:04.015999 systemd[1580]: Reached target paths.target - Paths. Oct 9 07:53:04.016014 systemd[1580]: Reached target timers.target - Timers. Oct 9 07:53:04.017527 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:53:04.031332 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:53:04.031463 systemd[1580]: Reached target sockets.target - Sockets. Oct 9 07:53:04.031478 systemd[1580]: Reached target basic.target - Basic System. Oct 9 07:53:04.031522 systemd[1580]: Reached target default.target - Main User Target. Oct 9 07:53:04.031555 systemd[1580]: Startup finished in 121ms. Oct 9 07:53:04.032052 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:53:04.039358 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:53:04.104393 systemd[1]: Started sshd@1-143.198.229.119:22-139.178.89.65:58562.service - OpenSSH per-connection server daemon (139.178.89.65:58562). Oct 9 07:53:04.159284 sshd[1591]: Accepted publickey for core from 139.178.89.65 port 58562 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:04.160864 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:04.165793 systemd-logind[1448]: New session 2 of user core. Oct 9 07:53:04.173414 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:53:04.234723 sshd[1591]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:04.247020 systemd[1]: sshd@1-143.198.229.119:22-139.178.89.65:58562.service: Deactivated successfully. Oct 9 07:53:04.249438 systemd[1]: session-2.scope: Deactivated successfully. 
Oct 9 07:53:04.251224 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:53:04.255497 systemd[1]: Started sshd@2-143.198.229.119:22-139.178.89.65:58574.service - OpenSSH per-connection server daemon (139.178.89.65:58574). Oct 9 07:53:04.257481 systemd-logind[1448]: Removed session 2. Oct 9 07:53:04.302465 sshd[1598]: Accepted publickey for core from 139.178.89.65 port 58574 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:04.304504 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:04.311127 systemd-logind[1448]: New session 3 of user core. Oct 9 07:53:04.318351 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:53:04.376312 sshd[1598]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:04.389509 systemd[1]: sshd@2-143.198.229.119:22-139.178.89.65:58574.service: Deactivated successfully. Oct 9 07:53:04.392132 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:53:04.393178 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:53:04.400583 systemd[1]: Started sshd@3-143.198.229.119:22-139.178.89.65:58590.service - OpenSSH per-connection server daemon (139.178.89.65:58590). Oct 9 07:53:04.402401 systemd-logind[1448]: Removed session 3. Oct 9 07:53:04.444164 sshd[1605]: Accepted publickey for core from 139.178.89.65 port 58590 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:04.445866 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:04.453379 systemd-logind[1448]: New session 4 of user core. Oct 9 07:53:04.459330 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:53:04.518450 sshd[1605]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:04.530983 systemd[1]: sshd@3-143.198.229.119:22-139.178.89.65:58590.service: Deactivated successfully. 
Oct 9 07:53:04.532926 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:53:04.533597 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:53:04.541589 systemd[1]: Started sshd@4-143.198.229.119:22-139.178.89.65:58594.service - OpenSSH per-connection server daemon (139.178.89.65:58594). Oct 9 07:53:04.543576 systemd-logind[1448]: Removed session 4. Oct 9 07:53:04.584837 sshd[1612]: Accepted publickey for core from 139.178.89.65 port 58594 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:04.586502 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:04.592514 systemd-logind[1448]: New session 5 of user core. Oct 9 07:53:04.602422 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:53:04.668312 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:53:04.668779 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:04.681913 sudo[1615]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:04.685757 sshd[1612]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:04.692967 systemd[1]: sshd@4-143.198.229.119:22-139.178.89.65:58594.service: Deactivated successfully. Oct 9 07:53:04.695046 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:53:04.695998 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:53:04.704415 systemd[1]: Started sshd@5-143.198.229.119:22-139.178.89.65:58600.service - OpenSSH per-connection server daemon (139.178.89.65:58600). Oct 9 07:53:04.707611 systemd-logind[1448]: Removed session 5. 
Oct 9 07:53:04.745735 sshd[1620]: Accepted publickey for core from 139.178.89.65 port 58600 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:04.747825 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:04.753562 systemd-logind[1448]: New session 6 of user core. Oct 9 07:53:04.760342 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:53:04.819183 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:53:04.819513 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:04.824192 sudo[1624]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:04.832523 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:53:04.832984 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:04.848473 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:53:04.862273 auditctl[1627]: No rules Oct 9 07:53:04.862791 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:53:04.862990 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:53:04.869525 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:53:04.910327 augenrules[1645]: No rules Oct 9 07:53:04.911809 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:53:04.913205 sudo[1623]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:04.918090 sshd[1620]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:04.924939 systemd[1]: sshd@5-143.198.229.119:22-139.178.89.65:58600.service: Deactivated successfully. Oct 9 07:53:04.926958 systemd[1]: session-6.scope: Deactivated successfully. 
Oct 9 07:53:04.929129 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:53:04.933493 systemd[1]: Started sshd@6-143.198.229.119:22-139.178.89.65:58614.service - OpenSSH per-connection server daemon (139.178.89.65:58614). Oct 9 07:53:04.935826 systemd-logind[1448]: Removed session 6. Oct 9 07:53:04.990099 sshd[1653]: Accepted publickey for core from 139.178.89.65 port 58614 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:04.991849 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:04.996436 systemd-logind[1448]: New session 7 of user core. Oct 9 07:53:05.004300 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:53:05.063990 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:53:05.064958 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:05.494435 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:53:05.495706 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:53:05.951162 dockerd[1673]: time="2024-10-09T07:53:05.950744419Z" level=info msg="Starting up" Oct 9 07:53:06.069648 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3101770630-merged.mount: Deactivated successfully. Oct 9 07:53:06.091843 dockerd[1673]: time="2024-10-09T07:53:06.091602537Z" level=info msg="Loading containers: start." Oct 9 07:53:06.203090 kernel: Initializing XFRM netlink socket Oct 9 07:53:06.287364 systemd-networkd[1366]: docker0: Link UP Oct 9 07:53:06.302748 dockerd[1673]: time="2024-10-09T07:53:06.302703994Z" level=info msg="Loading containers: done." 
Oct 9 07:53:06.316531 dockerd[1673]: time="2024-10-09T07:53:06.316132028Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:53:06.316531 dockerd[1673]: time="2024-10-09T07:53:06.316253966Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 9 07:53:06.316531 dockerd[1673]: time="2024-10-09T07:53:06.316387948Z" level=info msg="Daemon has completed initialization" Oct 9 07:53:06.317663 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4211143441-merged.mount: Deactivated successfully. Oct 9 07:53:06.351971 dockerd[1673]: time="2024-10-09T07:53:06.351889794Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:53:06.352504 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:53:07.196000 containerd[1473]: time="2024-10-09T07:53:07.195950828Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:53:07.788383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632369169.mount: Deactivated successfully. 
Oct 9 07:53:09.324404 containerd[1473]: time="2024-10-09T07:53:09.324345445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:09.325694 containerd[1473]: time="2024-10-09T07:53:09.325637943Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 07:53:09.326518 containerd[1473]: time="2024-10-09T07:53:09.326429801Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:09.330021 containerd[1473]: time="2024-10-09T07:53:09.329920450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:09.331887 containerd[1473]: time="2024-10-09T07:53:09.331608381Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.135602097s" Oct 9 07:53:09.331887 containerd[1473]: time="2024-10-09T07:53:09.331673760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:53:09.369945 containerd[1473]: time="2024-10-09T07:53:09.369901364Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:53:11.054104 containerd[1473]: time="2024-10-09T07:53:11.054020848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:11.055238 containerd[1473]: time="2024-10-09T07:53:11.055190371Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 07:53:11.055682 containerd[1473]: time="2024-10-09T07:53:11.055658722Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:11.058948 containerd[1473]: time="2024-10-09T07:53:11.058888569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:11.060090 containerd[1473]: time="2024-10-09T07:53:11.059894509Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.68995484s" Oct 9 07:53:11.060090 containerd[1473]: time="2024-10-09T07:53:11.059933092Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:53:11.089934 containerd[1473]: time="2024-10-09T07:53:11.089783285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:53:11.750382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:53:11.758839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:11.896377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:53:11.909944 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:53:12.010729 kubelet[1902]: E1009 07:53:12.010141 1902 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:53:12.016412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:53:12.016568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:53:12.259099 containerd[1473]: time="2024-10-09T07:53:12.259000370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:12.260376 containerd[1473]: time="2024-10-09T07:53:12.260282343Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 07:53:12.261223 containerd[1473]: time="2024-10-09T07:53:12.260478914Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:12.263609 containerd[1473]: time="2024-10-09T07:53:12.263532770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:12.265043 containerd[1473]: time="2024-10-09T07:53:12.264630425Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.174809555s" Oct 9 07:53:12.265043 containerd[1473]: time="2024-10-09T07:53:12.264665991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:53:12.292100 containerd[1473]: time="2024-10-09T07:53:12.291838588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:53:13.369642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253390177.mount: Deactivated successfully. Oct 9 07:53:13.778077 containerd[1473]: time="2024-10-09T07:53:13.777906283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:13.779350 containerd[1473]: time="2024-10-09T07:53:13.779084049Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 9 07:53:13.779914 containerd[1473]: time="2024-10-09T07:53:13.779870586Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:13.781817 containerd[1473]: time="2024-10-09T07:53:13.781760085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:13.782660 containerd[1473]: time="2024-10-09T07:53:13.782529503Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.490647436s" Oct 9 07:53:13.782660 containerd[1473]: time="2024-10-09T07:53:13.782562531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 07:53:13.814795 containerd[1473]: time="2024-10-09T07:53:13.814730405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:53:13.816328 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Oct 9 07:53:14.571112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928716253.mount: Deactivated successfully. Oct 9 07:53:15.484127 containerd[1473]: time="2024-10-09T07:53:15.483754057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:15.485414 containerd[1473]: time="2024-10-09T07:53:15.485357351Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:53:15.486087 containerd[1473]: time="2024-10-09T07:53:15.485848431Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:15.490091 containerd[1473]: time="2024-10-09T07:53:15.488686964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:15.492957 containerd[1473]: time="2024-10-09T07:53:15.492903616Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.677979717s" Oct 9 07:53:15.493345 containerd[1473]: time="2024-10-09T07:53:15.493305127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:53:15.517850 containerd[1473]: time="2024-10-09T07:53:15.517800251Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 07:53:16.046980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761846936.mount: Deactivated successfully. Oct 9 07:53:16.053012 containerd[1473]: time="2024-10-09T07:53:16.052928133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:16.054368 containerd[1473]: time="2024-10-09T07:53:16.054240316Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 07:53:16.055085 containerd[1473]: time="2024-10-09T07:53:16.054971065Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:16.058784 containerd[1473]: time="2024-10-09T07:53:16.058719631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:16.060325 containerd[1473]: time="2024-10-09T07:53:16.060133688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 542.292807ms" Oct 9 07:53:16.060325 containerd[1473]: time="2024-10-09T07:53:16.060184676Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 07:53:16.091845 containerd[1473]: time="2024-10-09T07:53:16.091763601Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 07:53:16.660737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552076820.mount: Deactivated successfully. Oct 9 07:53:16.926259 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Oct 9 07:53:18.480113 containerd[1473]: time="2024-10-09T07:53:18.479609473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:18.480954 containerd[1473]: time="2024-10-09T07:53:18.480825010Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 9 07:53:18.481787 containerd[1473]: time="2024-10-09T07:53:18.481318006Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:18.486097 containerd[1473]: time="2024-10-09T07:53:18.485457741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:18.488869 containerd[1473]: time="2024-10-09T07:53:18.488451415Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest 
\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.39665022s" Oct 9 07:53:18.488869 containerd[1473]: time="2024-10-09T07:53:18.488509942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 07:53:20.986776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:20.997435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:21.029453 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-7.scope)... Oct 9 07:53:21.029471 systemd[1]: Reloading... Oct 9 07:53:21.178118 zram_generator::config[2130]: No configuration found. Oct 9 07:53:21.312951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:53:21.392310 systemd[1]: Reloading finished in 362 ms. Oct 9 07:53:21.457535 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 07:53:21.457670 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 07:53:21.457959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:21.464617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:21.578327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:21.590509 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:53:21.642631 kubelet[2183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:53:21.642631 kubelet[2183]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:53:21.642631 kubelet[2183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:53:21.644043 kubelet[2183]: I1009 07:53:21.643950 2183 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:53:21.858204 kubelet[2183]: I1009 07:53:21.858027 2183 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:53:21.858204 kubelet[2183]: I1009 07:53:21.858073 2183 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:53:21.858388 kubelet[2183]: I1009 07:53:21.858348 2183 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:53:21.879809 kubelet[2183]: E1009 07:53:21.879765 2183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.229.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.882599 kubelet[2183]: I1009 07:53:21.882428 2183 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:53:21.898987 kubelet[2183]: I1009 07:53:21.898938 2183 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:53:21.900205 kubelet[2183]: I1009 07:53:21.900166 2183 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:53:21.901550 kubelet[2183]: I1009 07:53:21.901468 2183 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:53:21.901550 kubelet[2183]: I1009 07:53:21.901537 2183 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:53:21.901550 kubelet[2183]: I1009 07:53:21.901554 2183 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:53:21.901794 kubelet[2183]: I1009 
07:53:21.901733 2183 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:53:21.901928 kubelet[2183]: I1009 07:53:21.901903 2183 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:53:21.902356 kubelet[2183]: I1009 07:53:21.902329 2183 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:53:21.902401 kubelet[2183]: I1009 07:53:21.902392 2183 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:53:21.902438 kubelet[2183]: I1009 07:53:21.902423 2183 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:53:21.903969 kubelet[2183]: W1009 07:53:21.903909 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.229.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-5-a4f881141a&limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.903969 kubelet[2183]: E1009 07:53:21.903962 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.229.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-5-a4f881141a&limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.905843 kubelet[2183]: W1009 07:53:21.905116 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://143.198.229.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.905843 kubelet[2183]: E1009 07:53:21.905167 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.229.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.908528 kubelet[2183]: I1009 07:53:21.908186 2183 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 9 07:53:21.915847 kubelet[2183]: I1009 07:53:21.915794 2183 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:53:21.918194 kubelet[2183]: W1009 07:53:21.917413 2183 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 07:53:21.920262 kubelet[2183]: I1009 07:53:21.919883 2183 server.go:1256] "Started kubelet" Oct 9 07:53:21.924054 kubelet[2183]: I1009 07:53:21.923762 2183 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:53:21.926098 kubelet[2183]: I1009 07:53:21.924867 2183 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:53:21.926245 kubelet[2183]: I1009 07:53:21.926173 2183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:53:21.926505 kubelet[2183]: I1009 07:53:21.926486 2183 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:53:21.928236 kubelet[2183]: E1009 07:53:21.928102 2183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.229.119:6443/api/v1/namespaces/default/events\": dial tcp 143.198.229.119:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.1.0-5-a4f881141a.17fcb993e8c2b97c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-5-a4f881141a,UID:ci-4081.1.0-5-a4f881141a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-5-a4f881141a,},FirstTimestamp:2024-10-09 07:53:21.919834492 +0000 UTC m=+0.324274226,LastTimestamp:2024-10-09 07:53:21.919834492 +0000 UTC 
m=+0.324274226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-5-a4f881141a,}" Oct 9 07:53:21.928563 kubelet[2183]: I1009 07:53:21.928541 2183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:53:21.935566 kubelet[2183]: I1009 07:53:21.935528 2183 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:53:21.935696 kubelet[2183]: I1009 07:53:21.935644 2183 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:53:21.935729 kubelet[2183]: I1009 07:53:21.935712 2183 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:53:21.936182 kubelet[2183]: W1009 07:53:21.936135 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.229.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.936252 kubelet[2183]: E1009 07:53:21.936187 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.229.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.936447 kubelet[2183]: E1009 07:53:21.936428 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-5-a4f881141a?timeout=10s\": dial tcp 143.198.229.119:6443: connect: connection refused" interval="200ms" Oct 9 07:53:21.939786 kubelet[2183]: I1009 07:53:21.939756 2183 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:53:21.940617 kubelet[2183]: I1009 07:53:21.940581 2183 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:53:21.946281 kubelet[2183]: I1009 07:53:21.946252 2183 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:53:21.951539 kubelet[2183]: I1009 07:53:21.950343 2183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:53:21.952095 kubelet[2183]: I1009 07:53:21.951796 2183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:53:21.952095 kubelet[2183]: I1009 07:53:21.951831 2183 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:53:21.952095 kubelet[2183]: I1009 07:53:21.951859 2183 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:53:21.952095 kubelet[2183]: E1009 07:53:21.951938 2183 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:53:21.958490 kubelet[2183]: E1009 07:53:21.958459 2183 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:53:21.960898 kubelet[2183]: W1009 07:53:21.960835 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.229.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.960898 kubelet[2183]: E1009 07:53:21.960889 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.229.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:21.978336 kubelet[2183]: I1009 07:53:21.978307 2183 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:53:21.978853 kubelet[2183]: I1009 07:53:21.978568 2183 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:53:21.978853 kubelet[2183]: I1009 07:53:21.978600 2183 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:53:21.980202 kubelet[2183]: I1009 07:53:21.980048 2183 policy_none.go:49] "None policy: Start" Oct 9 07:53:21.980944 kubelet[2183]: I1009 07:53:21.980922 2183 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:53:21.981032 kubelet[2183]: I1009 07:53:21.980954 2183 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:53:21.988780 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 07:53:22.003083 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:53:22.006349 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 9 07:53:22.014546 kubelet[2183]: I1009 07:53:22.014030 2183 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:53:22.014546 kubelet[2183]: I1009 07:53:22.014412 2183 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:53:22.016457 kubelet[2183]: E1009 07:53:22.016156 2183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:22.037334 kubelet[2183]: I1009 07:53:22.037292 2183 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.037792 kubelet[2183]: E1009 07:53:22.037758 2183 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.229.119:6443/api/v1/nodes\": dial tcp 143.198.229.119:6443: connect: connection refused" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.053161 kubelet[2183]: I1009 07:53:22.053105 2183 topology_manager.go:215] "Topology Admit Handler" podUID="b059749d46d77f6528709e1f7f638ced" podNamespace="kube-system" podName="kube-scheduler-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.054783 kubelet[2183]: I1009 07:53:22.054160 2183 topology_manager.go:215] "Topology Admit Handler" podUID="96eaa91428c174c1dfbcff37e8e48ec0" podNamespace="kube-system" podName="kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.055646 kubelet[2183]: I1009 07:53:22.055629 2183 topology_manager.go:215] "Topology Admit Handler" podUID="9ed219a7fa7510be81e5a3626b8a6fb6" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.063649 systemd[1]: Created slice kubepods-burstable-podb059749d46d77f6528709e1f7f638ced.slice - libcontainer container kubepods-burstable-podb059749d46d77f6528709e1f7f638ced.slice. 
Oct 9 07:53:22.076633 systemd[1]: Created slice kubepods-burstable-pod96eaa91428c174c1dfbcff37e8e48ec0.slice - libcontainer container kubepods-burstable-pod96eaa91428c174c1dfbcff37e8e48ec0.slice. Oct 9 07:53:22.089474 systemd[1]: Created slice kubepods-burstable-pod9ed219a7fa7510be81e5a3626b8a6fb6.slice - libcontainer container kubepods-burstable-pod9ed219a7fa7510be81e5a3626b8a6fb6.slice. Oct 9 07:53:22.137534 kubelet[2183]: E1009 07:53:22.137373 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-5-a4f881141a?timeout=10s\": dial tcp 143.198.229.119:6443: connect: connection refused" interval="400ms" Oct 9 07:53:22.238409 kubelet[2183]: I1009 07:53:22.237779 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.238409 kubelet[2183]: I1009 07:53:22.238119 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b059749d46d77f6528709e1f7f638ced-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-5-a4f881141a\" (UID: \"b059749d46d77f6528709e1f7f638ced\") " pod="kube-system/kube-scheduler-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.238409 kubelet[2183]: I1009 07:53:22.238172 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96eaa91428c174c1dfbcff37e8e48ec0-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-5-a4f881141a\" (UID: \"96eaa91428c174c1dfbcff37e8e48ec0\") " pod="kube-system/kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 
07:53:22.238409 kubelet[2183]: I1009 07:53:22.238203 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.238409 kubelet[2183]: I1009 07:53:22.238230 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.238791 kubelet[2183]: I1009 07:53:22.238255 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.238791 kubelet[2183]: I1009 07:53:22.238273 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96eaa91428c174c1dfbcff37e8e48ec0-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-5-a4f881141a\" (UID: \"96eaa91428c174c1dfbcff37e8e48ec0\") " pod="kube-system/kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.238791 kubelet[2183]: I1009 07:53:22.238315 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/96eaa91428c174c1dfbcff37e8e48ec0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-5-a4f881141a\" (UID: \"96eaa91428c174c1dfbcff37e8e48ec0\") " pod="kube-system/kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.238791 kubelet[2183]: I1009 07:53:22.238340 2183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.239289 kubelet[2183]: I1009 07:53:22.239257 2183 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.239931 kubelet[2183]: E1009 07:53:22.239901 2183 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.229.119:6443/api/v1/nodes\": dial tcp 143.198.229.119:6443: connect: connection refused" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.375773 kubelet[2183]: E1009 07:53:22.375666 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:22.376516 containerd[1473]: time="2024-10-09T07:53:22.376470748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-5-a4f881141a,Uid:b059749d46d77f6528709e1f7f638ced,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:22.377994 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Oct 9 07:53:22.386911 kubelet[2183]: E1009 07:53:22.386828 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:22.393254 kubelet[2183]: E1009 07:53:22.392940 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:22.394374 containerd[1473]: time="2024-10-09T07:53:22.393500795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-5-a4f881141a,Uid:9ed219a7fa7510be81e5a3626b8a6fb6,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:22.394374 containerd[1473]: time="2024-10-09T07:53:22.394008985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-5-a4f881141a,Uid:96eaa91428c174c1dfbcff37e8e48ec0,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:22.538840 kubelet[2183]: E1009 07:53:22.538800 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-5-a4f881141a?timeout=10s\": dial tcp 143.198.229.119:6443: connect: connection refused" interval="800ms" Oct 9 07:53:22.641433 kubelet[2183]: I1009 07:53:22.641401 2183 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.642012 kubelet[2183]: E1009 07:53:22.641977 2183 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.229.119:6443/api/v1/nodes\": dial tcp 143.198.229.119:6443: connect: connection refused" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:22.759240 kubelet[2183]: W1009 07:53:22.759044 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://143.198.229.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:22.759240 kubelet[2183]: E1009 07:53:22.759147 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.229.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:22.881535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436859779.mount: Deactivated successfully. Oct 9 07:53:22.889313 containerd[1473]: time="2024-10-09T07:53:22.889172003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:22.890646 containerd[1473]: time="2024-10-09T07:53:22.890574833Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:22.891502 containerd[1473]: time="2024-10-09T07:53:22.891400402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:53:22.891774 containerd[1473]: time="2024-10-09T07:53:22.891621546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:53:22.893155 containerd[1473]: time="2024-10-09T07:53:22.892962038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:53:22.893155 containerd[1473]: time="2024-10-09T07:53:22.893098079Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:22.896537 containerd[1473]: 
time="2024-10-09T07:53:22.896468503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:22.899251 containerd[1473]: time="2024-10-09T07:53:22.898614565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.049092ms" Oct 9 07:53:22.901041 containerd[1473]: time="2024-10-09T07:53:22.900535079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.414374ms" Oct 9 07:53:22.903736 containerd[1473]: time="2024-10-09T07:53:22.903650127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:22.907097 containerd[1473]: time="2024-10-09T07:53:22.907007988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.41878ms" Oct 9 07:53:23.093469 containerd[1473]: time="2024-10-09T07:53:23.093220039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:23.093742 containerd[1473]: time="2024-10-09T07:53:23.093581708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:23.093742 containerd[1473]: time="2024-10-09T07:53:23.093646415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:23.093856 containerd[1473]: time="2024-10-09T07:53:23.093804880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:23.095300 containerd[1473]: time="2024-10-09T07:53:23.092836957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:23.095300 containerd[1473]: time="2024-10-09T07:53:23.093310749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:23.095822 containerd[1473]: time="2024-10-09T07:53:23.095486218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:23.095822 containerd[1473]: time="2024-10-09T07:53:23.095560333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:23.095822 containerd[1473]: time="2024-10-09T07:53:23.095713232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:23.096490 containerd[1473]: time="2024-10-09T07:53:23.096429000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:23.096984 containerd[1473]: time="2024-10-09T07:53:23.096461823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:23.098330 containerd[1473]: time="2024-10-09T07:53:23.098279702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:23.130649 systemd[1]: Started cri-containerd-f8a401115cabad4697e242bb79aa38ad7f0fcce60451427fecd01c02300811d2.scope - libcontainer container f8a401115cabad4697e242bb79aa38ad7f0fcce60451427fecd01c02300811d2. Oct 9 07:53:23.144967 systemd[1]: Started cri-containerd-418cb058074a540993608a498cbf4a92121256cbe3a723d42feb571da359af02.scope - libcontainer container 418cb058074a540993608a498cbf4a92121256cbe3a723d42feb571da359af02. Oct 9 07:53:23.151243 systemd[1]: Started cri-containerd-5e14235e396c2cb7bb2ca02694016683d8dbfe3b17a2f9ce113c38fead684083.scope - libcontainer container 5e14235e396c2cb7bb2ca02694016683d8dbfe3b17a2f9ce113c38fead684083. 
Oct 9 07:53:23.247585 containerd[1473]: time="2024-10-09T07:53:23.247535710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-5-a4f881141a,Uid:9ed219a7fa7510be81e5a3626b8a6fb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8a401115cabad4697e242bb79aa38ad7f0fcce60451427fecd01c02300811d2\"" Oct 9 07:53:23.249904 kubelet[2183]: E1009 07:53:23.249683 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:23.257508 containerd[1473]: time="2024-10-09T07:53:23.257449800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-5-a4f881141a,Uid:96eaa91428c174c1dfbcff37e8e48ec0,Namespace:kube-system,Attempt:0,} returns sandbox id \"418cb058074a540993608a498cbf4a92121256cbe3a723d42feb571da359af02\"" Oct 9 07:53:23.258495 kubelet[2183]: E1009 07:53:23.258308 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:23.261236 containerd[1473]: time="2024-10-09T07:53:23.260927018Z" level=info msg="CreateContainer within sandbox \"f8a401115cabad4697e242bb79aa38ad7f0fcce60451427fecd01c02300811d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:53:23.262053 containerd[1473]: time="2024-10-09T07:53:23.261998952Z" level=info msg="CreateContainer within sandbox \"418cb058074a540993608a498cbf4a92121256cbe3a723d42feb571da359af02\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:53:23.269349 containerd[1473]: time="2024-10-09T07:53:23.269280339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-5-a4f881141a,Uid:b059749d46d77f6528709e1f7f638ced,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5e14235e396c2cb7bb2ca02694016683d8dbfe3b17a2f9ce113c38fead684083\"" Oct 9 07:53:23.270291 kubelet[2183]: E1009 07:53:23.270256 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:23.272351 containerd[1473]: time="2024-10-09T07:53:23.272310017Z" level=info msg="CreateContainer within sandbox \"5e14235e396c2cb7bb2ca02694016683d8dbfe3b17a2f9ce113c38fead684083\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:53:23.300792 containerd[1473]: time="2024-10-09T07:53:23.300730095Z" level=info msg="CreateContainer within sandbox \"418cb058074a540993608a498cbf4a92121256cbe3a723d42feb571da359af02\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60c5e8fcc38e612dc0075afc67ab25750aebe227c8274ad28510d7ef4b3f1f59\"" Oct 9 07:53:23.301621 containerd[1473]: time="2024-10-09T07:53:23.301583250Z" level=info msg="StartContainer for \"60c5e8fcc38e612dc0075afc67ab25750aebe227c8274ad28510d7ef4b3f1f59\"" Oct 9 07:53:23.307111 containerd[1473]: time="2024-10-09T07:53:23.304935238Z" level=info msg="CreateContainer within sandbox \"f8a401115cabad4697e242bb79aa38ad7f0fcce60451427fecd01c02300811d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5436b7c88bd90f107f25708e8e619d3f44aaa37f6c1586429083436ca1ec2be\"" Oct 9 07:53:23.307111 containerd[1473]: time="2024-10-09T07:53:23.306277274Z" level=info msg="StartContainer for \"f5436b7c88bd90f107f25708e8e619d3f44aaa37f6c1586429083436ca1ec2be\"" Oct 9 07:53:23.311680 containerd[1473]: time="2024-10-09T07:53:23.311614785Z" level=info msg="CreateContainer within sandbox \"5e14235e396c2cb7bb2ca02694016683d8dbfe3b17a2f9ce113c38fead684083\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33723f1f7fb44a128fefd2366ebd3f8a9468a73c92def465c6776105e6afcd92\"" Oct 9 07:53:23.314910 
containerd[1473]: time="2024-10-09T07:53:23.314853459Z" level=info msg="StartContainer for \"33723f1f7fb44a128fefd2366ebd3f8a9468a73c92def465c6776105e6afcd92\"" Oct 9 07:53:23.322189 kubelet[2183]: W1009 07:53:23.321030 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.229.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:23.322333 kubelet[2183]: E1009 07:53:23.322226 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.229.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:23.339610 kubelet[2183]: E1009 07:53:23.339576 2183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-5-a4f881141a?timeout=10s\": dial tcp 143.198.229.119:6443: connect: connection refused" interval="1.6s" Oct 9 07:53:23.341561 systemd[1]: Started cri-containerd-60c5e8fcc38e612dc0075afc67ab25750aebe227c8274ad28510d7ef4b3f1f59.scope - libcontainer container 60c5e8fcc38e612dc0075afc67ab25750aebe227c8274ad28510d7ef4b3f1f59. Oct 9 07:53:23.374309 systemd[1]: Started cri-containerd-f5436b7c88bd90f107f25708e8e619d3f44aaa37f6c1586429083436ca1ec2be.scope - libcontainer container f5436b7c88bd90f107f25708e8e619d3f44aaa37f6c1586429083436ca1ec2be. Oct 9 07:53:23.393038 systemd[1]: Started cri-containerd-33723f1f7fb44a128fefd2366ebd3f8a9468a73c92def465c6776105e6afcd92.scope - libcontainer container 33723f1f7fb44a128fefd2366ebd3f8a9468a73c92def465c6776105e6afcd92. 
Oct 9 07:53:23.418226 containerd[1473]: time="2024-10-09T07:53:23.418150457Z" level=info msg="StartContainer for \"60c5e8fcc38e612dc0075afc67ab25750aebe227c8274ad28510d7ef4b3f1f59\" returns successfully" Oct 9 07:53:23.420631 kubelet[2183]: W1009 07:53:23.420464 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.229.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:23.420631 kubelet[2183]: E1009 07:53:23.420541 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.229.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:23.433706 kubelet[2183]: W1009 07:53:23.433631 2183 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.229.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-5-a4f881141a&limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:23.433706 kubelet[2183]: E1009 07:53:23.433696 2183 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.229.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-5-a4f881141a&limit=500&resourceVersion=0": dial tcp 143.198.229.119:6443: connect: connection refused Oct 9 07:53:23.444753 kubelet[2183]: I1009 07:53:23.444659 2183 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:23.445043 kubelet[2183]: E1009 07:53:23.445027 2183 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.229.119:6443/api/v1/nodes\": dial tcp 143.198.229.119:6443: connect: connection 
refused" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:23.455464 containerd[1473]: time="2024-10-09T07:53:23.455407626Z" level=info msg="StartContainer for \"f5436b7c88bd90f107f25708e8e619d3f44aaa37f6c1586429083436ca1ec2be\" returns successfully" Oct 9 07:53:23.477643 containerd[1473]: time="2024-10-09T07:53:23.477543565Z" level=info msg="StartContainer for \"33723f1f7fb44a128fefd2366ebd3f8a9468a73c92def465c6776105e6afcd92\" returns successfully" Oct 9 07:53:23.986463 kubelet[2183]: E1009 07:53:23.986427 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:23.988942 kubelet[2183]: E1009 07:53:23.988870 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:23.992082 kubelet[2183]: E1009 07:53:23.991495 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:24.993460 kubelet[2183]: E1009 07:53:24.993426 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:25.046348 kubelet[2183]: I1009 07:53:25.046296 2183 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:25.588585 kubelet[2183]: E1009 07:53:25.588532 2183 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.1.0-5-a4f881141a\" not found" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:25.664268 kubelet[2183]: I1009 07:53:25.664009 2183 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.1.0-5-a4f881141a" Oct 9 
07:53:25.687971 kubelet[2183]: E1009 07:53:25.687653 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:25.788017 kubelet[2183]: E1009 07:53:25.787978 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:25.889435 kubelet[2183]: E1009 07:53:25.888941 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:25.990149 kubelet[2183]: E1009 07:53:25.990085 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:26.090924 kubelet[2183]: E1009 07:53:26.090861 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:26.191740 kubelet[2183]: E1009 07:53:26.191548 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:26.292600 kubelet[2183]: E1009 07:53:26.292534 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:26.393700 kubelet[2183]: E1009 07:53:26.393622 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:26.494604 kubelet[2183]: E1009 07:53:26.494434 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:26.595237 kubelet[2183]: E1009 07:53:26.595168 2183 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.1.0-5-a4f881141a\" not found" Oct 9 07:53:26.905902 kubelet[2183]: I1009 07:53:26.905656 2183 apiserver.go:52] "Watching apiserver" Oct 9 07:53:26.936257 
kubelet[2183]: I1009 07:53:26.936178 2183 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:53:29.169329 kubelet[2183]: W1009 07:53:29.168922 2183 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:53:29.170463 kubelet[2183]: E1009 07:53:29.169889 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:29.822395 systemd[1]: Reloading requested from client PID 2460 ('systemctl') (unit session-7.scope)... Oct 9 07:53:29.822417 systemd[1]: Reloading... Oct 9 07:53:29.944181 zram_generator::config[2499]: No configuration found. Oct 9 07:53:30.005013 kubelet[2183]: E1009 07:53:30.004336 2183 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:30.135150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:53:30.236716 systemd[1]: Reloading finished in 413 ms. Oct 9 07:53:30.282349 kubelet[2183]: I1009 07:53:30.282202 2183 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:53:30.282440 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:30.288118 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:53:30.288434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:30.296886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 07:53:30.465297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:30.478679 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:53:30.570333 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:53:30.570333 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:53:30.570333 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:53:30.571778 kubelet[2549]: I1009 07:53:30.571101 2549 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:53:30.583186 kubelet[2549]: I1009 07:53:30.582480 2549 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:53:30.583186 kubelet[2549]: I1009 07:53:30.582521 2549 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:53:30.583186 kubelet[2549]: I1009 07:53:30.582858 2549 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:53:30.585715 kubelet[2549]: I1009 07:53:30.585661 2549 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 9 07:53:30.591942 kubelet[2549]: I1009 07:53:30.591861 2549 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:53:30.607855 kubelet[2549]: I1009 07:53:30.607768 2549 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:53:30.608265 kubelet[2549]: I1009 07:53:30.608240 2549 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:53:30.608535 kubelet[2549]: I1009 07:53:30.608516 2549 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Oct 9 07:53:30.608636 kubelet[2549]: I1009 07:53:30.608549 2549 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:53:30.608636 kubelet[2549]: I1009 07:53:30.608565 2549 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:53:30.608636 kubelet[2549]: I1009 07:53:30.608622 2549 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:53:30.608788 kubelet[2549]: I1009 07:53:30.608769 2549 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:53:30.608821 kubelet[2549]: I1009 07:53:30.608796 2549 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:53:30.608845 kubelet[2549]: I1009 07:53:30.608834 2549 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:53:30.608890 kubelet[2549]: I1009 07:53:30.608856 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:53:30.613135 kubelet[2549]: I1009 07:53:30.612187 2549 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 9 07:53:30.613135 kubelet[2549]: I1009 07:53:30.612464 2549 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:53:30.614543 kubelet[2549]: I1009 07:53:30.614513 2549 server.go:1256] "Started kubelet" Oct 9 07:53:30.620177 kubelet[2549]: I1009 07:53:30.618196 2549 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:53:30.620446 kubelet[2549]: I1009 07:53:30.620424 2549 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:53:30.621405 kubelet[2549]: I1009 07:53:30.621372 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:53:30.623767 kubelet[2549]: I1009 07:53:30.623718 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:53:30.624068 kubelet[2549]: I1009 07:53:30.624044 2549 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:53:30.634085 kubelet[2549]: I1009 07:53:30.634012 2549 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:53:30.634824 kubelet[2549]: I1009 07:53:30.634716 2549 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:53:30.635146 kubelet[2549]: I1009 07:53:30.635032 2549 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:53:30.650366 kubelet[2549]: I1009 07:53:30.650227 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:53:30.653097 kubelet[2549]: I1009 07:53:30.652925 2549 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:53:30.653097 kubelet[2549]: I1009 07:53:30.653052 2549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:53:30.655118 kubelet[2549]: I1009 07:53:30.653468 2549 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:53:30.655118 kubelet[2549]: I1009 07:53:30.653511 2549 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:53:30.655118 kubelet[2549]: I1009 07:53:30.653537 2549 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:53:30.655118 kubelet[2549]: E1009 07:53:30.653611 2549 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:53:30.664134 kubelet[2549]: I1009 07:53:30.663572 2549 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:53:30.740261 kubelet[2549]: I1009 07:53:30.737229 2549 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:30.753592 kubelet[2549]: I1009 07:53:30.753558 2549 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:53:30.754302 kubelet[2549]: I1009 07:53:30.754274 2549 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:53:30.754460 kubelet[2549]: I1009 07:53:30.754449 2549 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:53:30.755902 kubelet[2549]: I1009 07:53:30.754753 2549 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:53:30.755902 kubelet[2549]: I1009 07:53:30.755744 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:53:30.755902 kubelet[2549]: I1009 07:53:30.755761 2549 policy_none.go:49] "None policy: Start" Oct 9 07:53:30.755902 kubelet[2549]: I1009 07:53:30.755663 2549 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:30.756601 kubelet[2549]: I1009 07:53:30.756563 2549 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.1.0-5-a4f881141a" Oct 9 07:53:30.756958 kubelet[2549]: E1009 07:53:30.753716 2549 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 
07:53:30.759186 kubelet[2549]: I1009 07:53:30.758020 2549 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:53:30.759186 kubelet[2549]: I1009 07:53:30.758074 2549 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:53:30.759186 kubelet[2549]: I1009 07:53:30.758308 2549 state_mem.go:75] "Updated machine memory state" Oct 9 07:53:30.770435 kubelet[2549]: I1009 07:53:30.770403 2549 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:53:30.771794 kubelet[2549]: I1009 07:53:30.771766 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:53:30.957218 kubelet[2549]: I1009 07:53:30.957168 2549 topology_manager.go:215] "Topology Admit Handler" podUID="96eaa91428c174c1dfbcff37e8e48ec0" podNamespace="kube-system" podName="kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:30.957405 kubelet[2549]: I1009 07:53:30.957289 2549 topology_manager.go:215] "Topology Admit Handler" podUID="9ed219a7fa7510be81e5a3626b8a6fb6" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:30.957405 kubelet[2549]: I1009 07:53:30.957333 2549 topology_manager.go:215] "Topology Admit Handler" podUID="b059749d46d77f6528709e1f7f638ced" podNamespace="kube-system" podName="kube-scheduler-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:30.964444 kubelet[2549]: W1009 07:53:30.964224 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:53:30.976586 kubelet[2549]: W1009 07:53:30.976045 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:53:30.976586 kubelet[2549]: E1009 07:53:30.976164 2549 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" 
already exists" pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:30.977677 kubelet[2549]: W1009 07:53:30.977370 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:53:31.037258 kubelet[2549]: I1009 07:53:31.036780 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96eaa91428c174c1dfbcff37e8e48ec0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-5-a4f881141a\" (UID: \"96eaa91428c174c1dfbcff37e8e48ec0\") " pod="kube-system/kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.037258 kubelet[2549]: I1009 07:53:31.036822 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.037258 kubelet[2549]: I1009 07:53:31.036842 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.037258 kubelet[2549]: I1009 07:53:31.036873 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " 
pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.037258 kubelet[2549]: I1009 07:53:31.036892 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96eaa91428c174c1dfbcff37e8e48ec0-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-5-a4f881141a\" (UID: \"96eaa91428c174c1dfbcff37e8e48ec0\") " pod="kube-system/kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.037546 kubelet[2549]: I1009 07:53:31.036911 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96eaa91428c174c1dfbcff37e8e48ec0-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-5-a4f881141a\" (UID: \"96eaa91428c174c1dfbcff37e8e48ec0\") " pod="kube-system/kube-apiserver-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.037989 kubelet[2549]: I1009 07:53:31.037928 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.038247 kubelet[2549]: I1009 07:53:31.038166 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ed219a7fa7510be81e5a3626b8a6fb6-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-5-a4f881141a\" (UID: \"9ed219a7fa7510be81e5a3626b8a6fb6\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.038336 kubelet[2549]: I1009 07:53:31.038206 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b059749d46d77f6528709e1f7f638ced-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-5-a4f881141a\" (UID: \"b059749d46d77f6528709e1f7f638ced\") " pod="kube-system/kube-scheduler-ci-4081.1.0-5-a4f881141a" Oct 9 07:53:31.267233 kubelet[2549]: E1009 07:53:31.267111 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:31.277650 kubelet[2549]: E1009 07:53:31.277374 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:31.278158 kubelet[2549]: E1009 07:53:31.278096 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:31.611123 kubelet[2549]: I1009 07:53:31.610832 2549 apiserver.go:52] "Watching apiserver" Oct 9 07:53:31.724032 kubelet[2549]: E1009 07:53:31.721513 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:31.724846 kubelet[2549]: E1009 07:53:31.722899 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:31.730098 kubelet[2549]: E1009 07:53:31.729655 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:31.735052 kubelet[2549]: I1009 07:53:31.735015 2549 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:53:31.798361 
kubelet[2549]: I1009 07:53:31.798315 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.1.0-5-a4f881141a" podStartSLOduration=1.798261105 podStartE2EDuration="1.798261105s" podCreationTimestamp="2024-10-09 07:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:53:31.743220678 +0000 UTC m=+1.255103902" watchObservedRunningTime="2024-10-09 07:53:31.798261105 +0000 UTC m=+1.310144306" Oct 9 07:53:31.849591 kubelet[2549]: I1009 07:53:31.849184 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.1.0-5-a4f881141a" podStartSLOduration=2.8491407730000002 podStartE2EDuration="2.849140773s" podCreationTimestamp="2024-10-09 07:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:53:31.799231737 +0000 UTC m=+1.311114960" watchObservedRunningTime="2024-10-09 07:53:31.849140773 +0000 UTC m=+1.361023996" Oct 9 07:53:31.906748 kubelet[2549]: I1009 07:53:31.906569 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.1.0-5-a4f881141a" podStartSLOduration=1.9065282049999999 podStartE2EDuration="1.906528205s" podCreationTimestamp="2024-10-09 07:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:53:31.849710075 +0000 UTC m=+1.361593298" watchObservedRunningTime="2024-10-09 07:53:31.906528205 +0000 UTC m=+1.418411455" Oct 9 07:53:32.729107 kubelet[2549]: E1009 07:53:32.727132 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:33.725645 
kubelet[2549]: E1009 07:53:33.725579 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:34.366789 kubelet[2549]: E1009 07:53:34.366748 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:34.727882 kubelet[2549]: E1009 07:53:34.727254 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:35.898852 sudo[1656]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:35.902745 sshd[1653]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:35.907021 systemd[1]: sshd@6-143.198.229.119:22-139.178.89.65:58614.service: Deactivated successfully. Oct 9 07:53:35.909565 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:53:35.909767 systemd[1]: session-7.scope: Consumed 4.842s CPU time, 186.0M memory peak, 0B memory swap peak. Oct 9 07:53:35.911302 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:53:35.913570 systemd-logind[1448]: Removed session 7. 
Oct 9 07:53:39.052534 kubelet[2549]: E1009 07:53:39.052126 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:39.736788 kubelet[2549]: E1009 07:53:39.736364 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:42.232778 kubelet[2549]: E1009 07:53:42.232203 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:43.125297 kubelet[2549]: I1009 07:53:43.125263 2549 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:53:43.125665 containerd[1473]: time="2024-10-09T07:53:43.125633118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 07:53:43.126049 kubelet[2549]: I1009 07:53:43.125840 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:53:43.672493 kubelet[2549]: I1009 07:53:43.672256 2549 topology_manager.go:215] "Topology Admit Handler" podUID="d4db3edc-0f34-44ee-b84d-86df5dddb59f" podNamespace="kube-system" podName="kube-proxy-nclvj" Oct 9 07:53:43.683996 systemd[1]: Created slice kubepods-besteffort-podd4db3edc_0f34_44ee_b84d_86df5dddb59f.slice - libcontainer container kubepods-besteffort-podd4db3edc_0f34_44ee_b84d_86df5dddb59f.slice. 
Oct 9 07:53:43.688416 kubelet[2549]: W1009 07:53:43.688381 2549 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.1.0-5-a4f881141a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.1.0-5-a4f881141a' and this object Oct 9 07:53:43.688416 kubelet[2549]: E1009 07:53:43.688419 2549 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.1.0-5-a4f881141a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.1.0-5-a4f881141a' and this object Oct 9 07:53:43.710263 kubelet[2549]: I1009 07:53:43.710213 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4db3edc-0f34-44ee-b84d-86df5dddb59f-kube-proxy\") pod \"kube-proxy-nclvj\" (UID: \"d4db3edc-0f34-44ee-b84d-86df5dddb59f\") " pod="kube-system/kube-proxy-nclvj" Oct 9 07:53:43.710263 kubelet[2549]: I1009 07:53:43.710259 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4db3edc-0f34-44ee-b84d-86df5dddb59f-xtables-lock\") pod \"kube-proxy-nclvj\" (UID: \"d4db3edc-0f34-44ee-b84d-86df5dddb59f\") " pod="kube-system/kube-proxy-nclvj" Oct 9 07:53:43.710443 kubelet[2549]: I1009 07:53:43.710284 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkzx2\" (UniqueName: \"kubernetes.io/projected/d4db3edc-0f34-44ee-b84d-86df5dddb59f-kube-api-access-rkzx2\") pod \"kube-proxy-nclvj\" (UID: \"d4db3edc-0f34-44ee-b84d-86df5dddb59f\") " pod="kube-system/kube-proxy-nclvj" Oct 9 07:53:43.710443 kubelet[2549]: I1009 
07:53:43.710305 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4db3edc-0f34-44ee-b84d-86df5dddb59f-lib-modules\") pod \"kube-proxy-nclvj\" (UID: \"d4db3edc-0f34-44ee-b84d-86df5dddb59f\") " pod="kube-system/kube-proxy-nclvj" Oct 9 07:53:43.838685 kubelet[2549]: E1009 07:53:43.838580 2549 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 9 07:53:43.838685 kubelet[2549]: E1009 07:53:43.838684 2549 projected.go:200] Error preparing data for projected volume kube-api-access-rkzx2 for pod kube-system/kube-proxy-nclvj: configmap "kube-root-ca.crt" not found Oct 9 07:53:43.839977 kubelet[2549]: E1009 07:53:43.839899 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4db3edc-0f34-44ee-b84d-86df5dddb59f-kube-api-access-rkzx2 podName:d4db3edc-0f34-44ee-b84d-86df5dddb59f nodeName:}" failed. No retries permitted until 2024-10-09 07:53:44.338788158 +0000 UTC m=+13.850671382 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rkzx2" (UniqueName: "kubernetes.io/projected/d4db3edc-0f34-44ee-b84d-86df5dddb59f-kube-api-access-rkzx2") pod "kube-proxy-nclvj" (UID: "d4db3edc-0f34-44ee-b84d-86df5dddb59f") : configmap "kube-root-ca.crt" not found Oct 9 07:53:44.207453 kubelet[2549]: I1009 07:53:44.207404 2549 topology_manager.go:215] "Topology Admit Handler" podUID="cad684a5-6a26-4ce1-8227-a2b8004db6bb" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-ms8x6" Oct 9 07:53:44.214041 kubelet[2549]: I1009 07:53:44.213915 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cad684a5-6a26-4ce1-8227-a2b8004db6bb-var-lib-calico\") pod \"tigera-operator-5d56685c77-ms8x6\" (UID: \"cad684a5-6a26-4ce1-8227-a2b8004db6bb\") " pod="tigera-operator/tigera-operator-5d56685c77-ms8x6" Oct 9 07:53:44.214041 kubelet[2549]: I1009 07:53:44.213992 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22mst\" (UniqueName: \"kubernetes.io/projected/cad684a5-6a26-4ce1-8227-a2b8004db6bb-kube-api-access-22mst\") pod \"tigera-operator-5d56685c77-ms8x6\" (UID: \"cad684a5-6a26-4ce1-8227-a2b8004db6bb\") " pod="tigera-operator/tigera-operator-5d56685c77-ms8x6" Oct 9 07:53:44.217882 systemd[1]: Created slice kubepods-besteffort-podcad684a5_6a26_4ce1_8227_a2b8004db6bb.slice - libcontainer container kubepods-besteffort-podcad684a5_6a26_4ce1_8227_a2b8004db6bb.slice. Oct 9 07:53:44.527422 containerd[1473]: time="2024-10-09T07:53:44.527153831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ms8x6,Uid:cad684a5-6a26-4ce1-8227-a2b8004db6bb,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:53:44.559914 containerd[1473]: time="2024-10-09T07:53:44.559269982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:44.559914 containerd[1473]: time="2024-10-09T07:53:44.559376181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:44.559914 containerd[1473]: time="2024-10-09T07:53:44.559400785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:44.560999 containerd[1473]: time="2024-10-09T07:53:44.560876550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:44.593375 systemd[1]: Started cri-containerd-0bca2fb2d3fe12fa5a2dad9d7d41cc04e7a4a535070028faeb4802e976bad7a3.scope - libcontainer container 0bca2fb2d3fe12fa5a2dad9d7d41cc04e7a4a535070028faeb4802e976bad7a3. Oct 9 07:53:44.638344 containerd[1473]: time="2024-10-09T07:53:44.638296248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ms8x6,Uid:cad684a5-6a26-4ce1-8227-a2b8004db6bb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0bca2fb2d3fe12fa5a2dad9d7d41cc04e7a4a535070028faeb4802e976bad7a3\"" Oct 9 07:53:44.642116 containerd[1473]: time="2024-10-09T07:53:44.642079693Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:53:44.663352 update_engine[1450]: I20241009 07:53:44.662525 1450 update_attempter.cc:509] Updating boot flags... 
Oct 9 07:53:44.700205 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2678) Oct 9 07:53:44.813751 kubelet[2549]: E1009 07:53:44.813170 2549 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Oct 9 07:53:44.813751 kubelet[2549]: E1009 07:53:44.813307 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4db3edc-0f34-44ee-b84d-86df5dddb59f-kube-proxy podName:d4db3edc-0f34-44ee-b84d-86df5dddb59f nodeName:}" failed. No retries permitted until 2024-10-09 07:53:45.313281253 +0000 UTC m=+14.825164476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d4db3edc-0f34-44ee-b84d-86df5dddb59f-kube-proxy") pod "kube-proxy-nclvj" (UID: "d4db3edc-0f34-44ee-b84d-86df5dddb59f") : failed to sync configmap cache: timed out waiting for the condition Oct 9 07:53:45.493432 kubelet[2549]: E1009 07:53:45.493359 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:45.495136 containerd[1473]: time="2024-10-09T07:53:45.494839279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nclvj,Uid:d4db3edc-0f34-44ee-b84d-86df5dddb59f,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:45.527765 containerd[1473]: time="2024-10-09T07:53:45.527513936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:45.527765 containerd[1473]: time="2024-10-09T07:53:45.527698059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:45.527765 containerd[1473]: time="2024-10-09T07:53:45.527730017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:45.528731 containerd[1473]: time="2024-10-09T07:53:45.528508331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:45.552608 systemd[1]: run-containerd-runc-k8s.io-d5ad08c88d5de8c925234e05e450285df1ff9c116ea20b3611a0e3db02719b1c-runc.6brg0m.mount: Deactivated successfully. Oct 9 07:53:45.563357 systemd[1]: Started cri-containerd-d5ad08c88d5de8c925234e05e450285df1ff9c116ea20b3611a0e3db02719b1c.scope - libcontainer container d5ad08c88d5de8c925234e05e450285df1ff9c116ea20b3611a0e3db02719b1c. Oct 9 07:53:45.595138 containerd[1473]: time="2024-10-09T07:53:45.595056920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nclvj,Uid:d4db3edc-0f34-44ee-b84d-86df5dddb59f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5ad08c88d5de8c925234e05e450285df1ff9c116ea20b3611a0e3db02719b1c\"" Oct 9 07:53:45.596047 kubelet[2549]: E1009 07:53:45.596018 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:45.599583 containerd[1473]: time="2024-10-09T07:53:45.599527402Z" level=info msg="CreateContainer within sandbox \"d5ad08c88d5de8c925234e05e450285df1ff9c116ea20b3611a0e3db02719b1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:53:45.630027 containerd[1473]: time="2024-10-09T07:53:45.629876482Z" level=info msg="CreateContainer within sandbox \"d5ad08c88d5de8c925234e05e450285df1ff9c116ea20b3611a0e3db02719b1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"59d1141cef9f6c791622efcf6a638c2e8ca086ac436cfe761e6339d747b99588\"" Oct 9 07:53:45.634254 containerd[1473]: time="2024-10-09T07:53:45.631887203Z" level=info msg="StartContainer for \"59d1141cef9f6c791622efcf6a638c2e8ca086ac436cfe761e6339d747b99588\"" Oct 9 07:53:45.680378 systemd[1]: Started cri-containerd-59d1141cef9f6c791622efcf6a638c2e8ca086ac436cfe761e6339d747b99588.scope - libcontainer container 59d1141cef9f6c791622efcf6a638c2e8ca086ac436cfe761e6339d747b99588. Oct 9 07:53:45.777276 containerd[1473]: time="2024-10-09T07:53:45.776985403Z" level=info msg="StartContainer for \"59d1141cef9f6c791622efcf6a638c2e8ca086ac436cfe761e6339d747b99588\" returns successfully" Oct 9 07:53:46.506886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369639937.mount: Deactivated successfully. Oct 9 07:53:46.537129 containerd[1473]: time="2024-10-09T07:53:46.537043434Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:46.537881 containerd[1473]: time="2024-10-09T07:53:46.537815305Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136517" Oct 9 07:53:46.539695 containerd[1473]: time="2024-10-09T07:53:46.539251295Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:46.541849 containerd[1473]: time="2024-10-09T07:53:46.541683750Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:46.543028 containerd[1473]: time="2024-10-09T07:53:46.542986839Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag 
\"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.900868517s" Oct 9 07:53:46.543390 containerd[1473]: time="2024-10-09T07:53:46.543264788Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:53:46.546973 containerd[1473]: time="2024-10-09T07:53:46.546922999Z" level=info msg="CreateContainer within sandbox \"0bca2fb2d3fe12fa5a2dad9d7d41cc04e7a4a535070028faeb4802e976bad7a3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:53:46.562745 containerd[1473]: time="2024-10-09T07:53:46.562503333Z" level=info msg="CreateContainer within sandbox \"0bca2fb2d3fe12fa5a2dad9d7d41cc04e7a4a535070028faeb4802e976bad7a3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d8fce54e81e70ec6324dff9c1a8a1da23de916dc66da0f45c6c516ba7de790b8\"" Oct 9 07:53:46.564055 containerd[1473]: time="2024-10-09T07:53:46.563293202Z" level=info msg="StartContainer for \"d8fce54e81e70ec6324dff9c1a8a1da23de916dc66da0f45c6c516ba7de790b8\"" Oct 9 07:53:46.613464 systemd[1]: Started cri-containerd-d8fce54e81e70ec6324dff9c1a8a1da23de916dc66da0f45c6c516ba7de790b8.scope - libcontainer container d8fce54e81e70ec6324dff9c1a8a1da23de916dc66da0f45c6c516ba7de790b8. 
Oct 9 07:53:46.653006 containerd[1473]: time="2024-10-09T07:53:46.652401098Z" level=info msg="StartContainer for \"d8fce54e81e70ec6324dff9c1a8a1da23de916dc66da0f45c6c516ba7de790b8\" returns successfully" Oct 9 07:53:46.796958 kubelet[2549]: E1009 07:53:46.795883 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:46.828264 kubelet[2549]: I1009 07:53:46.827640 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-ms8x6" podStartSLOduration=0.923754372 podStartE2EDuration="2.827595544s" podCreationTimestamp="2024-10-09 07:53:44 +0000 UTC" firstStartedPulling="2024-10-09 07:53:44.639854783 +0000 UTC m=+14.151737985" lastFinishedPulling="2024-10-09 07:53:46.543695954 +0000 UTC m=+16.055579157" observedRunningTime="2024-10-09 07:53:46.811254797 +0000 UTC m=+16.323138016" watchObservedRunningTime="2024-10-09 07:53:46.827595544 +0000 UTC m=+16.339478769" Oct 9 07:53:47.797735 kubelet[2549]: E1009 07:53:47.797556 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:49.994368 kubelet[2549]: I1009 07:53:49.994307 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nclvj" podStartSLOduration=6.994243602 podStartE2EDuration="6.994243602s" podCreationTimestamp="2024-10-09 07:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:53:46.829041029 +0000 UTC m=+16.340924251" watchObservedRunningTime="2024-10-09 07:53:49.994243602 +0000 UTC m=+19.506126826" Oct 9 07:53:49.996147 kubelet[2549]: I1009 07:53:49.994562 2549 topology_manager.go:215] "Topology Admit Handler" 
podUID="6e7bf4f5-c2c4-42d1-b886-907689aa5c51" podNamespace="calico-system" podName="calico-typha-7854d879b7-sfz4n" Oct 9 07:53:50.008959 systemd[1]: Created slice kubepods-besteffort-pod6e7bf4f5_c2c4_42d1_b886_907689aa5c51.slice - libcontainer container kubepods-besteffort-pod6e7bf4f5_c2c4_42d1_b886_907689aa5c51.slice. Oct 9 07:53:50.052103 kubelet[2549]: I1009 07:53:50.051844 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7bf4f5-c2c4-42d1-b886-907689aa5c51-tigera-ca-bundle\") pod \"calico-typha-7854d879b7-sfz4n\" (UID: \"6e7bf4f5-c2c4-42d1-b886-907689aa5c51\") " pod="calico-system/calico-typha-7854d879b7-sfz4n" Oct 9 07:53:50.052103 kubelet[2549]: I1009 07:53:50.051920 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6e7bf4f5-c2c4-42d1-b886-907689aa5c51-typha-certs\") pod \"calico-typha-7854d879b7-sfz4n\" (UID: \"6e7bf4f5-c2c4-42d1-b886-907689aa5c51\") " pod="calico-system/calico-typha-7854d879b7-sfz4n" Oct 9 07:53:50.052103 kubelet[2549]: I1009 07:53:50.051955 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmndb\" (UniqueName: \"kubernetes.io/projected/6e7bf4f5-c2c4-42d1-b886-907689aa5c51-kube-api-access-qmndb\") pod \"calico-typha-7854d879b7-sfz4n\" (UID: \"6e7bf4f5-c2c4-42d1-b886-907689aa5c51\") " pod="calico-system/calico-typha-7854d879b7-sfz4n" Oct 9 07:53:50.131969 kubelet[2549]: I1009 07:53:50.131906 2549 topology_manager.go:215] "Topology Admit Handler" podUID="66d5816d-8583-4e63-a341-9b01884b0ba6" podNamespace="calico-system" podName="calico-node-ww74c" Oct 9 07:53:50.146238 systemd[1]: Created slice kubepods-besteffort-pod66d5816d_8583_4e63_a341_9b01884b0ba6.slice - libcontainer container kubepods-besteffort-pod66d5816d_8583_4e63_a341_9b01884b0ba6.slice. 
Oct 9 07:53:50.152578 kubelet[2549]: I1009 07:53:50.152521 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66d5816d-8583-4e63-a341-9b01884b0ba6-tigera-ca-bundle\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.152863 kubelet[2549]: I1009 07:53:50.152841 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/66d5816d-8583-4e63-a341-9b01884b0ba6-node-certs\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.152970 kubelet[2549]: I1009 07:53:50.152893 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-xtables-lock\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.152970 kubelet[2549]: I1009 07:53:50.152918 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-flexvol-driver-host\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.152970 kubelet[2549]: I1009 07:53:50.152942 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-lib-modules\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.153084 kubelet[2549]: I1009 07:53:50.152983 2549 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-var-lib-calico\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.153084 kubelet[2549]: I1009 07:53:50.153004 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-cni-log-dir\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.153084 kubelet[2549]: I1009 07:53:50.153022 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-var-run-calico\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.153084 kubelet[2549]: I1009 07:53:50.153043 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-cni-net-dir\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.153084 kubelet[2549]: I1009 07:53:50.153079 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62c5\" (UniqueName: \"kubernetes.io/projected/66d5816d-8583-4e63-a341-9b01884b0ba6-kube-api-access-n62c5\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.153240 kubelet[2549]: I1009 07:53:50.153108 2549 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-policysync\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.153240 kubelet[2549]: I1009 07:53:50.153138 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/66d5816d-8583-4e63-a341-9b01884b0ba6-cni-bin-dir\") pod \"calico-node-ww74c\" (UID: \"66d5816d-8583-4e63-a341-9b01884b0ba6\") " pod="calico-system/calico-node-ww74c" Oct 9 07:53:50.269650 kubelet[2549]: E1009 07:53:50.269003 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.269650 kubelet[2549]: W1009 07:53:50.269054 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.269650 kubelet[2549]: E1009 07:53:50.269146 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.271330 kubelet[2549]: E1009 07:53:50.271278 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.272017 kubelet[2549]: W1009 07:53:50.271761 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.272017 kubelet[2549]: E1009 07:53:50.271799 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.297007 kubelet[2549]: E1009 07:53:50.296790 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.297007 kubelet[2549]: W1009 07:53:50.296814 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.297007 kubelet[2549]: E1009 07:53:50.296837 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.317383 kubelet[2549]: E1009 07:53:50.317333 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:50.318858 containerd[1473]: time="2024-10-09T07:53:50.318404010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7854d879b7-sfz4n,Uid:6e7bf4f5-c2c4-42d1-b886-907689aa5c51,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:50.352195 kubelet[2549]: I1009 07:53:50.352145 2549 topology_manager.go:215] "Topology Admit Handler" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" podNamespace="calico-system" podName="csi-node-driver-87rvr" Oct 9 07:53:50.352911 kubelet[2549]: E1009 07:53:50.352540 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87rvr" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" Oct 9 07:53:50.369705 containerd[1473]: time="2024-10-09T07:53:50.366167920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:50.369705 containerd[1473]: time="2024-10-09T07:53:50.366290764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:50.369705 containerd[1473]: time="2024-10-09T07:53:50.366338053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:50.369705 containerd[1473]: time="2024-10-09T07:53:50.366482957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:50.408928 systemd[1]: Started cri-containerd-80cda21956b6cf461af0b9598cd9c6c3812572a59ceab6076e15223e3a4d5e4c.scope - libcontainer container 80cda21956b6cf461af0b9598cd9c6c3812572a59ceab6076e15223e3a4d5e4c. Oct 9 07:53:50.450620 kubelet[2549]: E1009 07:53:50.450297 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.450620 kubelet[2549]: W1009 07:53:50.450331 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.450620 kubelet[2549]: E1009 07:53:50.450364 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.451726 kubelet[2549]: E1009 07:53:50.451160 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.451726 kubelet[2549]: W1009 07:53:50.451193 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.451726 kubelet[2549]: E1009 07:53:50.451216 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.452737 kubelet[2549]: E1009 07:53:50.452709 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.454350 kubelet[2549]: W1009 07:53:50.454154 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.454350 kubelet[2549]: E1009 07:53:50.454197 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.454676 kubelet[2549]: E1009 07:53:50.454658 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.454948 kubelet[2549]: W1009 07:53:50.454760 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.454948 kubelet[2549]: E1009 07:53:50.454788 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.455278 kubelet[2549]: E1009 07:53:50.455258 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.455554 kubelet[2549]: W1009 07:53:50.455377 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.455554 kubelet[2549]: E1009 07:53:50.455408 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.458181 kubelet[2549]: E1009 07:53:50.458151 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.458524 kubelet[2549]: W1009 07:53:50.458333 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.458524 kubelet[2549]: E1009 07:53:50.458364 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.458828 kubelet[2549]: E1009 07:53:50.458812 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.459104 kubelet[2549]: W1009 07:53:50.458921 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.459104 kubelet[2549]: E1009 07:53:50.458949 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.460260 kubelet[2549]: E1009 07:53:50.460240 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.461141 kubelet[2549]: W1009 07:53:50.460360 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.461141 kubelet[2549]: E1009 07:53:50.460386 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.461141 kubelet[2549]: E1009 07:53:50.460565 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:50.461675 kubelet[2549]: E1009 07:53:50.461659 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.461790 kubelet[2549]: W1009 07:53:50.461777 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.461905 kubelet[2549]: E1009 07:53:50.461866 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.462931 containerd[1473]: time="2024-10-09T07:53:50.462134956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ww74c,Uid:66d5816d-8583-4e63-a341-9b01884b0ba6,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:50.463430 kubelet[2549]: E1009 07:53:50.463415 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.463561 kubelet[2549]: W1009 07:53:50.463546 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.463671 kubelet[2549]: E1009 07:53:50.463658 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.464361 kubelet[2549]: E1009 07:53:50.464345 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.464474 kubelet[2549]: W1009 07:53:50.464461 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.464552 kubelet[2549]: E1009 07:53:50.464543 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.465820 kubelet[2549]: E1009 07:53:50.465802 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.465928 kubelet[2549]: W1009 07:53:50.465914 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.466001 kubelet[2549]: E1009 07:53:50.465992 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.466371 kubelet[2549]: E1009 07:53:50.466357 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.466584 kubelet[2549]: W1009 07:53:50.466470 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.466584 kubelet[2549]: E1009 07:53:50.466492 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.467237 kubelet[2549]: E1009 07:53:50.466870 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.467237 kubelet[2549]: W1009 07:53:50.466885 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.467237 kubelet[2549]: E1009 07:53:50.466907 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.467934 kubelet[2549]: E1009 07:53:50.467464 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.468218 kubelet[2549]: W1009 07:53:50.468036 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.468218 kubelet[2549]: E1009 07:53:50.468091 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.468480 kubelet[2549]: E1009 07:53:50.468422 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.468480 kubelet[2549]: W1009 07:53:50.468436 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.468480 kubelet[2549]: E1009 07:53:50.468458 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.469957 kubelet[2549]: E1009 07:53:50.469771 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.469957 kubelet[2549]: W1009 07:53:50.469788 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.469957 kubelet[2549]: E1009 07:53:50.469811 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.470598 kubelet[2549]: E1009 07:53:50.470416 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.470598 kubelet[2549]: W1009 07:53:50.470432 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.470598 kubelet[2549]: E1009 07:53:50.470450 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.471683 kubelet[2549]: E1009 07:53:50.470918 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.472049 kubelet[2549]: W1009 07:53:50.471863 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.472049 kubelet[2549]: E1009 07:53:50.471912 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.472546 kubelet[2549]: E1009 07:53:50.472524 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.472683 kubelet[2549]: W1009 07:53:50.472627 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.472683 kubelet[2549]: E1009 07:53:50.472653 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.473960 kubelet[2549]: E1009 07:53:50.473830 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.473960 kubelet[2549]: W1009 07:53:50.473852 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.473960 kubelet[2549]: E1009 07:53:50.473871 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.473960 kubelet[2549]: I1009 07:53:50.473925 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3cebfdf7-f604-4870-8e68-e3e120793ced-varrun\") pod \"csi-node-driver-87rvr\" (UID: \"3cebfdf7-f604-4870-8e68-e3e120793ced\") " pod="calico-system/csi-node-driver-87rvr" Oct 9 07:53:50.476131 kubelet[2549]: E1009 07:53:50.475301 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.476131 kubelet[2549]: W1009 07:53:50.475324 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.476131 kubelet[2549]: E1009 07:53:50.475353 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.476991 kubelet[2549]: E1009 07:53:50.476810 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.476991 kubelet[2549]: W1009 07:53:50.476830 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.476991 kubelet[2549]: E1009 07:53:50.476862 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.478507 kubelet[2549]: E1009 07:53:50.478223 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.478507 kubelet[2549]: W1009 07:53:50.478250 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.478507 kubelet[2549]: E1009 07:53:50.478279 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.478507 kubelet[2549]: I1009 07:53:50.478334 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3cebfdf7-f604-4870-8e68-e3e120793ced-socket-dir\") pod \"csi-node-driver-87rvr\" (UID: \"3cebfdf7-f604-4870-8e68-e3e120793ced\") " pod="calico-system/csi-node-driver-87rvr" Oct 9 07:53:50.478969 kubelet[2549]: E1009 07:53:50.478921 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.479896 kubelet[2549]: W1009 07:53:50.479860 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.480236 kubelet[2549]: E1009 07:53:50.480082 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.480236 kubelet[2549]: I1009 07:53:50.480143 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2xz\" (UniqueName: \"kubernetes.io/projected/3cebfdf7-f604-4870-8e68-e3e120793ced-kube-api-access-8g2xz\") pod \"csi-node-driver-87rvr\" (UID: \"3cebfdf7-f604-4870-8e68-e3e120793ced\") " pod="calico-system/csi-node-driver-87rvr" Oct 9 07:53:50.481660 kubelet[2549]: E1009 07:53:50.481251 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.481660 kubelet[2549]: W1009 07:53:50.481274 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.481660 kubelet[2549]: E1009 07:53:50.481336 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.481660 kubelet[2549]: I1009 07:53:50.481400 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cebfdf7-f604-4870-8e68-e3e120793ced-kubelet-dir\") pod \"csi-node-driver-87rvr\" (UID: \"3cebfdf7-f604-4870-8e68-e3e120793ced\") " pod="calico-system/csi-node-driver-87rvr" Oct 9 07:53:50.483657 kubelet[2549]: E1009 07:53:50.483206 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.483657 kubelet[2549]: W1009 07:53:50.483244 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.483657 kubelet[2549]: E1009 07:53:50.483321 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.484594 kubelet[2549]: E1009 07:53:50.484262 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.484594 kubelet[2549]: W1009 07:53:50.484287 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.484594 kubelet[2549]: E1009 07:53:50.484350 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.486205 kubelet[2549]: E1009 07:53:50.485972 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.486205 kubelet[2549]: W1009 07:53:50.485994 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.486615 kubelet[2549]: E1009 07:53:50.486224 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.486615 kubelet[2549]: I1009 07:53:50.486289 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3cebfdf7-f604-4870-8e68-e3e120793ced-registration-dir\") pod \"csi-node-driver-87rvr\" (UID: \"3cebfdf7-f604-4870-8e68-e3e120793ced\") " pod="calico-system/csi-node-driver-87rvr" Oct 9 07:53:50.489100 kubelet[2549]: E1009 07:53:50.488015 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.489582 kubelet[2549]: W1009 07:53:50.489312 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.489582 kubelet[2549]: E1009 07:53:50.489437 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.489809 kubelet[2549]: E1009 07:53:50.489792 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.489878 kubelet[2549]: W1009 07:53:50.489866 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.490100 kubelet[2549]: E1009 07:53:50.489943 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.490349 kubelet[2549]: E1009 07:53:50.490333 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.490632 kubelet[2549]: W1009 07:53:50.490440 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.490632 kubelet[2549]: E1009 07:53:50.490474 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.491887 kubelet[2549]: E1009 07:53:50.491611 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.491887 kubelet[2549]: W1009 07:53:50.491630 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.491887 kubelet[2549]: E1009 07:53:50.491660 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.494851 kubelet[2549]: E1009 07:53:50.494233 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.494851 kubelet[2549]: W1009 07:53:50.494265 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.494851 kubelet[2549]: E1009 07:53:50.494295 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.494851 kubelet[2549]: E1009 07:53:50.494696 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.494851 kubelet[2549]: W1009 07:53:50.494715 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.494851 kubelet[2549]: E1009 07:53:50.494743 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.513084 containerd[1473]: time="2024-10-09T07:53:50.512389074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:50.513084 containerd[1473]: time="2024-10-09T07:53:50.512472535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:50.513084 containerd[1473]: time="2024-10-09T07:53:50.512485655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:50.515444 containerd[1473]: time="2024-10-09T07:53:50.515298045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:50.551353 systemd[1]: Started cri-containerd-70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18.scope - libcontainer container 70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18. 
Oct 9 07:53:50.598206 kubelet[2549]: E1009 07:53:50.597144 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.598206 kubelet[2549]: W1009 07:53:50.597182 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.598206 kubelet[2549]: E1009 07:53:50.597219 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.598206 kubelet[2549]: E1009 07:53:50.597823 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.598206 kubelet[2549]: W1009 07:53:50.597840 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.598206 kubelet[2549]: E1009 07:53:50.597885 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.599855 kubelet[2549]: E1009 07:53:50.599175 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.599855 kubelet[2549]: W1009 07:53:50.599231 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.599855 kubelet[2549]: E1009 07:53:50.599271 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.600892 kubelet[2549]: E1009 07:53:50.600558 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.600892 kubelet[2549]: W1009 07:53:50.600581 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.600892 kubelet[2549]: E1009 07:53:50.600631 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.604287 kubelet[2549]: E1009 07:53:50.602480 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.604287 kubelet[2549]: W1009 07:53:50.602503 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.606818 kubelet[2549]: E1009 07:53:50.606779 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.607188 kubelet[2549]: E1009 07:53:50.607169 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.607331 kubelet[2549]: W1009 07:53:50.607309 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.607519 kubelet[2549]: E1009 07:53:50.607469 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.607947 kubelet[2549]: E1009 07:53:50.607927 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.608133 kubelet[2549]: W1009 07:53:50.608109 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.611329 kubelet[2549]: E1009 07:53:50.611214 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.612438 kubelet[2549]: E1009 07:53:50.612229 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.612438 kubelet[2549]: W1009 07:53:50.612268 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.613758 kubelet[2549]: E1009 07:53:50.613384 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.613758 kubelet[2549]: W1009 07:53:50.613412 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.616539 kubelet[2549]: E1009 07:53:50.616134 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.616539 kubelet[2549]: W1009 07:53:50.616286 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.618375 kubelet[2549]: E1009 07:53:50.617758 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.618375 kubelet[2549]: W1009 07:53:50.617780 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.618375 kubelet[2549]: E1009 07:53:50.617817 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.619577 kubelet[2549]: E1009 07:53:50.619555 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.620073 kubelet[2549]: W1009 07:53:50.619719 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.620073 kubelet[2549]: E1009 07:53:50.619751 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.623691 kubelet[2549]: E1009 07:53:50.623661 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.626274 kubelet[2549]: W1009 07:53:50.623752 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.626274 kubelet[2549]: E1009 07:53:50.623785 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.626274 kubelet[2549]: E1009 07:53:50.625894 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.626274 kubelet[2549]: E1009 07:53:50.625954 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.627673 kubelet[2549]: E1009 07:53:50.627369 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.627673 kubelet[2549]: W1009 07:53:50.627403 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.627673 kubelet[2549]: E1009 07:53:50.627432 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.629084 kubelet[2549]: E1009 07:53:50.628845 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.629084 kubelet[2549]: W1009 07:53:50.628875 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.629084 kubelet[2549]: E1009 07:53:50.628902 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.632881 kubelet[2549]: E1009 07:53:50.632676 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.632881 kubelet[2549]: W1009 07:53:50.632703 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.632881 kubelet[2549]: E1009 07:53:50.632734 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.633860 kubelet[2549]: E1009 07:53:50.633597 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.633860 kubelet[2549]: W1009 07:53:50.633622 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.633860 kubelet[2549]: E1009 07:53:50.633650 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.635080 kubelet[2549]: E1009 07:53:50.634584 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.635080 kubelet[2549]: W1009 07:53:50.634604 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.635080 kubelet[2549]: E1009 07:53:50.634628 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.635080 kubelet[2549]: E1009 07:53:50.634671 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.636119 kubelet[2549]: E1009 07:53:50.635432 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.636119 kubelet[2549]: W1009 07:53:50.635451 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.636765 kubelet[2549]: E1009 07:53:50.636612 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.637105 kubelet[2549]: E1009 07:53:50.637088 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.638264 kubelet[2549]: W1009 07:53:50.637985 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.638264 kubelet[2549]: E1009 07:53:50.638037 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.639522 kubelet[2549]: E1009 07:53:50.639495 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.639774 kubelet[2549]: W1009 07:53:50.639749 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.644100 kubelet[2549]: E1009 07:53:50.642357 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.645156 kubelet[2549]: E1009 07:53:50.644825 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.648330 kubelet[2549]: W1009 07:53:50.645339 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.648330 kubelet[2549]: E1009 07:53:50.645386 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.648807 kubelet[2549]: E1009 07:53:50.648782 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.649052 kubelet[2549]: W1009 07:53:50.648921 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.649052 kubelet[2549]: E1009 07:53:50.648962 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.649597 kubelet[2549]: E1009 07:53:50.649576 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.649709 kubelet[2549]: W1009 07:53:50.649693 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.649915 kubelet[2549]: E1009 07:53:50.649780 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:50.650344 kubelet[2549]: E1009 07:53:50.650324 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.650476 kubelet[2549]: W1009 07:53:50.650459 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.650572 kubelet[2549]: E1009 07:53:50.650560 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.660138 containerd[1473]: time="2024-10-09T07:53:50.658706113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7854d879b7-sfz4n,Uid:6e7bf4f5-c2c4-42d1-b886-907689aa5c51,Namespace:calico-system,Attempt:0,} returns sandbox id \"80cda21956b6cf461af0b9598cd9c6c3812572a59ceab6076e15223e3a4d5e4c\"" Oct 9 07:53:50.667766 kubelet[2549]: E1009 07:53:50.667734 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:50.668012 kubelet[2549]: W1009 07:53:50.667992 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:50.668212 kubelet[2549]: E1009 07:53:50.668174 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:50.671526 containerd[1473]: time="2024-10-09T07:53:50.671420858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ww74c,Uid:66d5816d-8583-4e63-a341-9b01884b0ba6,Namespace:calico-system,Attempt:0,} returns sandbox id \"70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18\"" Oct 9 07:53:50.674105 kubelet[2549]: E1009 07:53:50.672170 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:50.676389 containerd[1473]: time="2024-10-09T07:53:50.676342859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:53:50.678135 kubelet[2549]: E1009 07:53:50.677683 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:51.654173 kubelet[2549]: E1009 07:53:51.654024 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87rvr" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" Oct 9 07:53:53.033258 containerd[1473]: time="2024-10-09T07:53:53.033161848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:53.034891 containerd[1473]: time="2024-10-09T07:53:53.034342088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:53:53.035744 containerd[1473]: time="2024-10-09T07:53:53.035325147Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:53.039432 containerd[1473]: time="2024-10-09T07:53:53.039350147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:53.041047 containerd[1473]: time="2024-10-09T07:53:53.040289342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.363732006s" Oct 9 07:53:53.041047 containerd[1473]: time="2024-10-09T07:53:53.040334542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:53:53.042118 containerd[1473]: time="2024-10-09T07:53:53.041844687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:53:53.078353 containerd[1473]: time="2024-10-09T07:53:53.077767886Z" level=info msg="CreateContainer within sandbox \"80cda21956b6cf461af0b9598cd9c6c3812572a59ceab6076e15223e3a4d5e4c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:53:53.121351 containerd[1473]: time="2024-10-09T07:53:53.121272561Z" level=info msg="CreateContainer within sandbox \"80cda21956b6cf461af0b9598cd9c6c3812572a59ceab6076e15223e3a4d5e4c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1add26b85874d2e1265a02dd0852c70c0516cfa8d2de640b1227fc16b3d99b88\"" Oct 9 07:53:53.123851 containerd[1473]: time="2024-10-09T07:53:53.123791190Z" level=info msg="StartContainer for \"1add26b85874d2e1265a02dd0852c70c0516cfa8d2de640b1227fc16b3d99b88\"" Oct 9 07:53:53.173397 
systemd[1]: Started cri-containerd-1add26b85874d2e1265a02dd0852c70c0516cfa8d2de640b1227fc16b3d99b88.scope - libcontainer container 1add26b85874d2e1265a02dd0852c70c0516cfa8d2de640b1227fc16b3d99b88. Oct 9 07:53:53.247483 containerd[1473]: time="2024-10-09T07:53:53.247386143Z" level=info msg="StartContainer for \"1add26b85874d2e1265a02dd0852c70c0516cfa8d2de640b1227fc16b3d99b88\" returns successfully" Oct 9 07:53:53.657559 kubelet[2549]: E1009 07:53:53.657114 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87rvr" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" Oct 9 07:53:53.849500 kubelet[2549]: E1009 07:53:53.848801 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:53.899526 kubelet[2549]: E1009 07:53:53.899491 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.899526 kubelet[2549]: W1009 07:53:53.899515 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.899733 kubelet[2549]: E1009 07:53:53.899544 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.899862 kubelet[2549]: E1009 07:53:53.899844 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.899899 kubelet[2549]: W1009 07:53:53.899862 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.899899 kubelet[2549]: E1009 07:53:53.899894 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.900130 kubelet[2549]: E1009 07:53:53.900119 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.900130 kubelet[2549]: W1009 07:53:53.900128 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.900220 kubelet[2549]: E1009 07:53:53.900139 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.900405 kubelet[2549]: E1009 07:53:53.900391 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.900405 kubelet[2549]: W1009 07:53:53.900402 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.900485 kubelet[2549]: E1009 07:53:53.900415 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.900640 kubelet[2549]: E1009 07:53:53.900630 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.900640 kubelet[2549]: W1009 07:53:53.900640 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.900702 kubelet[2549]: E1009 07:53:53.900650 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.900826 kubelet[2549]: E1009 07:53:53.900815 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.900873 kubelet[2549]: W1009 07:53:53.900833 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.900873 kubelet[2549]: E1009 07:53:53.900843 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.901012 kubelet[2549]: E1009 07:53:53.901001 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.901012 kubelet[2549]: W1009 07:53:53.901009 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.901012 kubelet[2549]: E1009 07:53:53.901019 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.901194 kubelet[2549]: E1009 07:53:53.901186 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.901194 kubelet[2549]: W1009 07:53:53.901192 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.901272 kubelet[2549]: E1009 07:53:53.901215 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.901399 kubelet[2549]: E1009 07:53:53.901378 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.901399 kubelet[2549]: W1009 07:53:53.901387 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.901399 kubelet[2549]: E1009 07:53:53.901396 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.901559 kubelet[2549]: E1009 07:53:53.901549 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.901559 kubelet[2549]: W1009 07:53:53.901558 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.901735 kubelet[2549]: E1009 07:53:53.901567 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.901794 kubelet[2549]: E1009 07:53:53.901781 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.901794 kubelet[2549]: W1009 07:53:53.901793 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.901847 kubelet[2549]: E1009 07:53:53.901804 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.901994 kubelet[2549]: E1009 07:53:53.901980 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.901994 kubelet[2549]: W1009 07:53:53.901989 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.902093 kubelet[2549]: E1009 07:53:53.901998 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.902247 kubelet[2549]: E1009 07:53:53.902230 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.902247 kubelet[2549]: W1009 07:53:53.902242 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.902364 kubelet[2549]: E1009 07:53:53.902254 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.902442 kubelet[2549]: E1009 07:53:53.902422 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.902442 kubelet[2549]: W1009 07:53:53.902428 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.902528 kubelet[2549]: E1009 07:53:53.902437 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.902680 kubelet[2549]: E1009 07:53:53.902669 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.902710 kubelet[2549]: W1009 07:53:53.902680 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.902710 kubelet[2549]: E1009 07:53:53.902691 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.956194 kubelet[2549]: E1009 07:53:53.956073 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.956194 kubelet[2549]: W1009 07:53:53.956101 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.956194 kubelet[2549]: E1009 07:53:53.956128 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.957140 kubelet[2549]: E1009 07:53:53.956937 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.957140 kubelet[2549]: W1009 07:53:53.956957 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.957140 kubelet[2549]: E1009 07:53:53.956991 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.957650 kubelet[2549]: E1009 07:53:53.957586 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.957650 kubelet[2549]: W1009 07:53:53.957601 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.957650 kubelet[2549]: E1009 07:53:53.957626 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.957988 kubelet[2549]: E1009 07:53:53.957962 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.957988 kubelet[2549]: W1009 07:53:53.957978 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.958099 kubelet[2549]: E1009 07:53:53.958004 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.958415 kubelet[2549]: E1009 07:53:53.958382 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.958415 kubelet[2549]: W1009 07:53:53.958397 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.958505 kubelet[2549]: E1009 07:53:53.958446 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.958738 kubelet[2549]: E1009 07:53:53.958721 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.958738 kubelet[2549]: W1009 07:53:53.958737 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.958738 kubelet[2549]: E1009 07:53:53.958782 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.959140 kubelet[2549]: E1009 07:53:53.959124 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.959140 kubelet[2549]: W1009 07:53:53.959139 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.959250 kubelet[2549]: E1009 07:53:53.959232 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.959623 kubelet[2549]: E1009 07:53:53.959599 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.959623 kubelet[2549]: W1009 07:53:53.959613 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.959738 kubelet[2549]: E1009 07:53:53.959726 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.959894 kubelet[2549]: E1009 07:53:53.959882 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.959894 kubelet[2549]: W1009 07:53:53.959892 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.959966 kubelet[2549]: E1009 07:53:53.959908 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.961270 kubelet[2549]: E1009 07:53:53.961251 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.961497 kubelet[2549]: W1009 07:53:53.961377 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.961497 kubelet[2549]: E1009 07:53:53.961404 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.961838 kubelet[2549]: E1009 07:53:53.961778 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.961838 kubelet[2549]: W1009 07:53:53.961791 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.961838 kubelet[2549]: E1009 07:53:53.961827 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.962253 kubelet[2549]: E1009 07:53:53.962145 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.962253 kubelet[2549]: W1009 07:53:53.962156 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.962253 kubelet[2549]: E1009 07:53:53.962174 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.962693 kubelet[2549]: E1009 07:53:53.962530 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.962693 kubelet[2549]: W1009 07:53:53.962542 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.962693 kubelet[2549]: E1009 07:53:53.962560 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.962929 kubelet[2549]: E1009 07:53:53.962844 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.962929 kubelet[2549]: W1009 07:53:53.962855 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.962929 kubelet[2549]: E1009 07:53:53.962881 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.963381 kubelet[2549]: E1009 07:53:53.963287 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.963381 kubelet[2549]: W1009 07:53:53.963300 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.963778 kubelet[2549]: E1009 07:53:53.963520 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.964645 kubelet[2549]: E1009 07:53:53.964078 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.964645 kubelet[2549]: W1009 07:53:53.964099 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.964645 kubelet[2549]: E1009 07:53:53.964117 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:53.964933 kubelet[2549]: E1009 07:53:53.964912 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.964988 kubelet[2549]: W1009 07:53:53.964934 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.964988 kubelet[2549]: E1009 07:53:53.964960 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:53.965478 kubelet[2549]: E1009 07:53:53.965462 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:53.965478 kubelet[2549]: W1009 07:53:53.965476 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:53.965572 kubelet[2549]: E1009 07:53:53.965489 2549 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:54.278647 containerd[1473]: time="2024-10-09T07:53:54.278000777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:54.280324 containerd[1473]: time="2024-10-09T07:53:54.279902023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:53:54.282271 containerd[1473]: time="2024-10-09T07:53:54.282111837Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:54.285618 containerd[1473]: time="2024-10-09T07:53:54.285557682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:54.286871 containerd[1473]: time="2024-10-09T07:53:54.286324555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.244427837s" Oct 9 07:53:54.286871 containerd[1473]: time="2024-10-09T07:53:54.286357428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:53:54.290326 containerd[1473]: time="2024-10-09T07:53:54.290276615Z" level=info msg="CreateContainer within sandbox \"70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:53:54.302218 containerd[1473]: time="2024-10-09T07:53:54.300878751Z" level=info msg="CreateContainer within sandbox \"70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5\"" Oct 9 07:53:54.306295 containerd[1473]: time="2024-10-09T07:53:54.303150642Z" level=info msg="StartContainer for \"abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5\"" Oct 9 07:53:54.359346 systemd[1]: Started cri-containerd-abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5.scope - libcontainer container abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5. Oct 9 07:53:54.405657 containerd[1473]: time="2024-10-09T07:53:54.405597540Z" level=info msg="StartContainer for \"abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5\" returns successfully" Oct 9 07:53:54.430120 systemd[1]: cri-containerd-abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5.scope: Deactivated successfully. Oct 9 07:53:54.464765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5-rootfs.mount: Deactivated successfully. 
Oct 9 07:53:54.469849 containerd[1473]: time="2024-10-09T07:53:54.469344375Z" level=info msg="shim disconnected" id=abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5 namespace=k8s.io Oct 9 07:53:54.469849 containerd[1473]: time="2024-10-09T07:53:54.469414765Z" level=warning msg="cleaning up after shim disconnected" id=abebb745de0f197ea970e5cbf61bbaa3eb773dbe647ac89af1326347d2a64cc5 namespace=k8s.io Oct 9 07:53:54.469849 containerd[1473]: time="2024-10-09T07:53:54.469424240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:53:54.852084 kubelet[2549]: I1009 07:53:54.852040 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:53:54.852720 kubelet[2549]: E1009 07:53:54.852687 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:54.854098 kubelet[2549]: E1009 07:53:54.853382 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:54.855123 containerd[1473]: time="2024-10-09T07:53:54.854784997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:53:54.894022 kubelet[2549]: I1009 07:53:54.893523 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7854d879b7-sfz4n" podStartSLOduration=3.527889954 podStartE2EDuration="5.893471727s" podCreationTimestamp="2024-10-09 07:53:49 +0000 UTC" firstStartedPulling="2024-10-09 07:53:50.675127767 +0000 UTC m=+20.187010971" lastFinishedPulling="2024-10-09 07:53:53.040709535 +0000 UTC m=+22.552592744" observedRunningTime="2024-10-09 07:53:53.882307686 +0000 UTC m=+23.394190909" watchObservedRunningTime="2024-10-09 07:53:54.893471727 +0000 UTC m=+24.405354973" Oct 9 07:53:55.653986 kubelet[2549]: E1009 
07:53:55.653908 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87rvr" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" Oct 9 07:53:57.655028 kubelet[2549]: E1009 07:53:57.654783 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87rvr" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" Oct 9 07:53:57.990959 containerd[1473]: time="2024-10-09T07:53:57.990111749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:57.990959 containerd[1473]: time="2024-10-09T07:53:57.990885841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:53:57.992732 containerd[1473]: time="2024-10-09T07:53:57.991283552Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:57.993373 containerd[1473]: time="2024-10-09T07:53:57.993340493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:57.994538 containerd[1473]: time="2024-10-09T07:53:57.994472679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.139637881s" Oct 9 07:53:57.994660 containerd[1473]: time="2024-10-09T07:53:57.994644229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:53:57.998087 containerd[1473]: time="2024-10-09T07:53:57.997877695Z" level=info msg="CreateContainer within sandbox \"70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:53:58.015936 containerd[1473]: time="2024-10-09T07:53:58.015886117Z" level=info msg="CreateContainer within sandbox \"70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a\"" Oct 9 07:53:58.018096 containerd[1473]: time="2024-10-09T07:53:58.016914128Z" level=info msg="StartContainer for \"f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a\"" Oct 9 07:53:58.125331 systemd[1]: Started cri-containerd-f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a.scope - libcontainer container f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a. Oct 9 07:53:58.161834 containerd[1473]: time="2024-10-09T07:53:58.161783138Z" level=info msg="StartContainer for \"f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a\" returns successfully" Oct 9 07:53:58.733249 systemd[1]: cri-containerd-f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a.scope: Deactivated successfully. Oct 9 07:53:58.786313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a-rootfs.mount: Deactivated successfully. 
Oct 9 07:53:58.792269 containerd[1473]: time="2024-10-09T07:53:58.790910072Z" level=info msg="shim disconnected" id=f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a namespace=k8s.io Oct 9 07:53:58.792269 containerd[1473]: time="2024-10-09T07:53:58.791019624Z" level=warning msg="cleaning up after shim disconnected" id=f5f3c62b552a29cf3011113634ae2810d449abd8eba68423b2d5b15b4c62de6a namespace=k8s.io Oct 9 07:53:58.792269 containerd[1473]: time="2024-10-09T07:53:58.791032524Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:53:58.803792 kubelet[2549]: I1009 07:53:58.803750 2549 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:53:58.859364 kubelet[2549]: I1009 07:53:58.859306 2549 topology_manager.go:215] "Topology Admit Handler" podUID="4d558ca1-372f-4592-9031-7ad3d0cc0acb" podNamespace="kube-system" podName="coredns-76f75df574-lsh4c" Oct 9 07:53:58.872042 kubelet[2549]: E1009 07:53:58.871483 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:58.874633 systemd[1]: Created slice kubepods-burstable-pod4d558ca1_372f_4592_9031_7ad3d0cc0acb.slice - libcontainer container kubepods-burstable-pod4d558ca1_372f_4592_9031_7ad3d0cc0acb.slice. 
Oct 9 07:53:58.879287 containerd[1473]: time="2024-10-09T07:53:58.877280475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:53:58.881976 kubelet[2549]: I1009 07:53:58.881705 2549 topology_manager.go:215] "Topology Admit Handler" podUID="728b2665-75cd-48cb-bec2-37026553584c" podNamespace="kube-system" podName="coredns-76f75df574-2p7nw" Oct 9 07:53:58.889791 kubelet[2549]: I1009 07:53:58.888469 2549 topology_manager.go:215] "Topology Admit Handler" podUID="e429ea71-5b02-4029-b597-4f1188a52c84" podNamespace="calico-system" podName="calico-kube-controllers-5d8dc58b7-n2qsx" Oct 9 07:53:58.898181 systemd[1]: Created slice kubepods-burstable-pod728b2665_75cd_48cb_bec2_37026553584c.slice - libcontainer container kubepods-burstable-pod728b2665_75cd_48cb_bec2_37026553584c.slice. Oct 9 07:53:58.913007 systemd[1]: Created slice kubepods-besteffort-pode429ea71_5b02_4029_b597_4f1188a52c84.slice - libcontainer container kubepods-besteffort-pode429ea71_5b02_4029_b597_4f1188a52c84.slice. 
Oct 9 07:53:59.005326 kubelet[2549]: I1009 07:53:59.001604 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d558ca1-372f-4592-9031-7ad3d0cc0acb-config-volume\") pod \"coredns-76f75df574-lsh4c\" (UID: \"4d558ca1-372f-4592-9031-7ad3d0cc0acb\") " pod="kube-system/coredns-76f75df574-lsh4c" Oct 9 07:53:59.005326 kubelet[2549]: I1009 07:53:59.001702 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncfj9\" (UniqueName: \"kubernetes.io/projected/4d558ca1-372f-4592-9031-7ad3d0cc0acb-kube-api-access-ncfj9\") pod \"coredns-76f75df574-lsh4c\" (UID: \"4d558ca1-372f-4592-9031-7ad3d0cc0acb\") " pod="kube-system/coredns-76f75df574-lsh4c" Oct 9 07:53:59.005326 kubelet[2549]: I1009 07:53:59.001737 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt525\" (UniqueName: \"kubernetes.io/projected/728b2665-75cd-48cb-bec2-37026553584c-kube-api-access-lt525\") pod \"coredns-76f75df574-2p7nw\" (UID: \"728b2665-75cd-48cb-bec2-37026553584c\") " pod="kube-system/coredns-76f75df574-2p7nw" Oct 9 07:53:59.005326 kubelet[2549]: I1009 07:53:59.001813 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg4kb\" (UniqueName: \"kubernetes.io/projected/e429ea71-5b02-4029-b597-4f1188a52c84-kube-api-access-lg4kb\") pod \"calico-kube-controllers-5d8dc58b7-n2qsx\" (UID: \"e429ea71-5b02-4029-b597-4f1188a52c84\") " pod="calico-system/calico-kube-controllers-5d8dc58b7-n2qsx" Oct 9 07:53:59.005326 kubelet[2549]: I1009 07:53:59.001850 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e429ea71-5b02-4029-b597-4f1188a52c84-tigera-ca-bundle\") pod \"calico-kube-controllers-5d8dc58b7-n2qsx\" (UID: 
\"e429ea71-5b02-4029-b597-4f1188a52c84\") " pod="calico-system/calico-kube-controllers-5d8dc58b7-n2qsx" Oct 9 07:53:59.005718 kubelet[2549]: I1009 07:53:59.001881 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728b2665-75cd-48cb-bec2-37026553584c-config-volume\") pod \"coredns-76f75df574-2p7nw\" (UID: \"728b2665-75cd-48cb-bec2-37026553584c\") " pod="kube-system/coredns-76f75df574-2p7nw" Oct 9 07:53:59.186276 kubelet[2549]: E1009 07:53:59.186216 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:59.189250 containerd[1473]: time="2024-10-09T07:53:59.189142037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lsh4c,Uid:4d558ca1-372f-4592-9031-7ad3d0cc0acb,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:59.214039 kubelet[2549]: E1009 07:53:59.212652 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:53:59.221292 containerd[1473]: time="2024-10-09T07:53:59.220438503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2p7nw,Uid:728b2665-75cd-48cb-bec2-37026553584c,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:59.223768 containerd[1473]: time="2024-10-09T07:53:59.223720622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8dc58b7-n2qsx,Uid:e429ea71-5b02-4029-b597-4f1188a52c84,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:59.621419 containerd[1473]: time="2024-10-09T07:53:59.621318635Z" level=error msg="Failed to destroy network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.624214 containerd[1473]: time="2024-10-09T07:53:59.624101645Z" level=error msg="Failed to destroy network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.627963 containerd[1473]: time="2024-10-09T07:53:59.625032303Z" level=error msg="Failed to destroy network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.629530 containerd[1473]: time="2024-10-09T07:53:59.629461165Z" level=error msg="encountered an error cleaning up failed sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.629941 containerd[1473]: time="2024-10-09T07:53:59.629901554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8dc58b7-n2qsx,Uid:e429ea71-5b02-4029-b597-4f1188a52c84,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.632692 containerd[1473]: time="2024-10-09T07:53:59.628912125Z" 
level=error msg="encountered an error cleaning up failed sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.632692 containerd[1473]: time="2024-10-09T07:53:59.632000239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lsh4c,Uid:4d558ca1-372f-4592-9031-7ad3d0cc0acb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.639214 containerd[1473]: time="2024-10-09T07:53:59.628941905Z" level=error msg="encountered an error cleaning up failed sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.639214 containerd[1473]: time="2024-10-09T07:53:59.637780045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2p7nw,Uid:728b2665-75cd-48cb-bec2-37026553584c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.639847 kubelet[2549]: E1009 07:53:59.639642 2549 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.639847 kubelet[2549]: E1009 07:53:59.639817 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.640840 kubelet[2549]: E1009 07:53:59.639942 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8dc58b7-n2qsx" Oct 9 07:53:59.640840 kubelet[2549]: E1009 07:53:59.640110 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d8dc58b7-n2qsx" Oct 9 07:53:59.640840 kubelet[2549]: E1009 07:53:59.640201 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-5d8dc58b7-n2qsx_calico-system(e429ea71-5b02-4029-b597-4f1188a52c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d8dc58b7-n2qsx_calico-system(e429ea71-5b02-4029-b597-4f1188a52c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d8dc58b7-n2qsx" podUID="e429ea71-5b02-4029-b597-4f1188a52c84" Oct 9 07:53:59.641166 kubelet[2549]: E1009 07:53:59.640002 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2p7nw" Oct 9 07:53:59.641166 kubelet[2549]: E1009 07:53:59.640274 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2p7nw" Oct 9 07:53:59.641166 kubelet[2549]: E1009 07:53:59.640416 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2p7nw_kube-system(728b2665-75cd-48cb-bec2-37026553584c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-2p7nw_kube-system(728b2665-75cd-48cb-bec2-37026553584c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2p7nw" podUID="728b2665-75cd-48cb-bec2-37026553584c" Oct 9 07:53:59.641446 kubelet[2549]: E1009 07:53:59.640031 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.641446 kubelet[2549]: E1009 07:53:59.640560 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-lsh4c" Oct 9 07:53:59.641446 kubelet[2549]: E1009 07:53:59.640587 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-lsh4c" Oct 9 07:53:59.641576 kubelet[2549]: E1009 07:53:59.640735 2549 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-lsh4c_kube-system(4d558ca1-372f-4592-9031-7ad3d0cc0acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-lsh4c_kube-system(4d558ca1-372f-4592-9031-7ad3d0cc0acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-lsh4c" podUID="4d558ca1-372f-4592-9031-7ad3d0cc0acb" Oct 9 07:53:59.662499 systemd[1]: Created slice kubepods-besteffort-pod3cebfdf7_f604_4870_8e68_e3e120793ced.slice - libcontainer container kubepods-besteffort-pod3cebfdf7_f604_4870_8e68_e3e120793ced.slice. Oct 9 07:53:59.668178 containerd[1473]: time="2024-10-09T07:53:59.667687027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87rvr,Uid:3cebfdf7-f604-4870-8e68-e3e120793ced,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:59.769887 containerd[1473]: time="2024-10-09T07:53:59.769801753Z" level=error msg="Failed to destroy network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.770628 containerd[1473]: time="2024-10-09T07:53:59.770540631Z" level=error msg="encountered an error cleaning up failed sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 9 07:53:59.770893 containerd[1473]: time="2024-10-09T07:53:59.770767467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87rvr,Uid:3cebfdf7-f604-4870-8e68-e3e120793ced,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.771203 kubelet[2549]: E1009 07:53:59.771170 2549 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.771297 kubelet[2549]: E1009 07:53:59.771244 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-87rvr" Oct 9 07:53:59.771297 kubelet[2549]: E1009 07:53:59.771282 2549 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-87rvr" Oct 9 
07:53:59.771382 kubelet[2549]: E1009 07:53:59.771360 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-87rvr_calico-system(3cebfdf7-f604-4870-8e68-e3e120793ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-87rvr_calico-system(3cebfdf7-f604-4870-8e68-e3e120793ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-87rvr" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" Oct 9 07:53:59.879404 kubelet[2549]: I1009 07:53:59.877392 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:53:59.888549 containerd[1473]: time="2024-10-09T07:53:59.888443650Z" level=info msg="StopPodSandbox for \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\"" Oct 9 07:53:59.890253 containerd[1473]: time="2024-10-09T07:53:59.890177988Z" level=info msg="Ensure that sandbox 3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d in task-service has been cleanup successfully" Oct 9 07:53:59.898488 kubelet[2549]: I1009 07:53:59.897123 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:53:59.898681 containerd[1473]: time="2024-10-09T07:53:59.897305605Z" level=info msg="StopPodSandbox for \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\"" Oct 9 07:53:59.898681 containerd[1473]: time="2024-10-09T07:53:59.897500339Z" level=info msg="Ensure that sandbox 5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089 in 
task-service has been cleanup successfully" Oct 9 07:53:59.915145 kubelet[2549]: I1009 07:53:59.914555 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:53:59.915346 containerd[1473]: time="2024-10-09T07:53:59.915202520Z" level=info msg="StopPodSandbox for \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\"" Oct 9 07:53:59.915413 containerd[1473]: time="2024-10-09T07:53:59.915387239Z" level=info msg="Ensure that sandbox fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c in task-service has been cleanup successfully" Oct 9 07:53:59.918631 kubelet[2549]: I1009 07:53:59.918590 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:53:59.920084 containerd[1473]: time="2024-10-09T07:53:59.919572310Z" level=info msg="StopPodSandbox for \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\"" Oct 9 07:53:59.920084 containerd[1473]: time="2024-10-09T07:53:59.919829318Z" level=info msg="Ensure that sandbox f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab in task-service has been cleanup successfully" Oct 9 07:53:59.996156 containerd[1473]: time="2024-10-09T07:53:59.996094257Z" level=error msg="StopPodSandbox for \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\" failed" error="failed to destroy network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:59.996798 kubelet[2549]: E1009 07:53:59.996574 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:53:59.996798 kubelet[2549]: E1009 07:53:59.996668 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d"} Oct 9 07:53:59.996798 kubelet[2549]: E1009 07:53:59.996710 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cebfdf7-f604-4870-8e68-e3e120793ced\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:53:59.996798 kubelet[2549]: E1009 07:53:59.996742 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cebfdf7-f604-4870-8e68-e3e120793ced\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-87rvr" podUID="3cebfdf7-f604-4870-8e68-e3e120793ced" Oct 9 07:54:00.000708 containerd[1473]: time="2024-10-09T07:54:00.000443483Z" level=error msg="StopPodSandbox for \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\" failed" error="failed to destroy network for sandbox 
\"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:54:00.001498 kubelet[2549]: E1009 07:54:00.001299 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:00.001498 kubelet[2549]: E1009 07:54:00.001356 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089"} Oct 9 07:54:00.001498 kubelet[2549]: E1009 07:54:00.001397 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d558ca1-372f-4592-9031-7ad3d0cc0acb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:54:00.001498 kubelet[2549]: E1009 07:54:00.001430 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d558ca1-372f-4592-9031-7ad3d0cc0acb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-lsh4c" podUID="4d558ca1-372f-4592-9031-7ad3d0cc0acb" Oct 9 07:54:00.024927 containerd[1473]: time="2024-10-09T07:54:00.024794220Z" level=error msg="StopPodSandbox for \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\" failed" error="failed to destroy network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:54:00.025783 kubelet[2549]: E1009 07:54:00.025306 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:00.025783 kubelet[2549]: E1009 07:54:00.025369 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c"} Oct 9 07:54:00.025783 kubelet[2549]: E1009 07:54:00.025428 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e429ea71-5b02-4029-b597-4f1188a52c84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Oct 9 07:54:00.025783 kubelet[2549]: E1009 07:54:00.025474 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e429ea71-5b02-4029-b597-4f1188a52c84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d8dc58b7-n2qsx" podUID="e429ea71-5b02-4029-b597-4f1188a52c84" Oct 9 07:54:00.030808 containerd[1473]: time="2024-10-09T07:54:00.030726612Z" level=error msg="StopPodSandbox for \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\" failed" error="failed to destroy network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:54:00.031523 kubelet[2549]: E1009 07:54:00.031302 2549 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:00.031523 kubelet[2549]: E1009 07:54:00.031372 2549 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab"} Oct 9 07:54:00.031523 kubelet[2549]: E1009 
07:54:00.031426 2549 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"728b2665-75cd-48cb-bec2-37026553584c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:54:00.031523 kubelet[2549]: E1009 07:54:00.031472 2549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"728b2665-75cd-48cb-bec2-37026553584c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2p7nw" podUID="728b2665-75cd-48cb-bec2-37026553584c" Oct 9 07:54:00.128293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c-shm.mount: Deactivated successfully. Oct 9 07:54:00.128532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089-shm.mount: Deactivated successfully. Oct 9 07:54:04.957416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818134767.mount: Deactivated successfully. 
Oct 9 07:54:04.997343 containerd[1473]: time="2024-10-09T07:54:04.997274446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:05.013183 containerd[1473]: time="2024-10-09T07:54:05.012810987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:54:05.025045 containerd[1473]: time="2024-10-09T07:54:05.024918005Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:05.027689 containerd[1473]: time="2024-10-09T07:54:05.027578335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:05.029089 containerd[1473]: time="2024-10-09T07:54:05.028797735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.151469811s" Oct 9 07:54:05.029089 containerd[1473]: time="2024-10-09T07:54:05.028858426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:54:05.128658 containerd[1473]: time="2024-10-09T07:54:05.128576384Z" level=info msg="CreateContainer within sandbox \"70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:54:05.240183 containerd[1473]: time="2024-10-09T07:54:05.239847164Z" level=info msg="CreateContainer 
within sandbox \"70855f96b5695acb5b0a038906fc0a1131a20beb10df5ec6056d5bb3a5607f18\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ccd0a7f46d14e647566d0d9db14e4ca2e855340a5efc9e7e44bc95b8c1e6589\"" Oct 9 07:54:05.242640 containerd[1473]: time="2024-10-09T07:54:05.242580292Z" level=info msg="StartContainer for \"9ccd0a7f46d14e647566d0d9db14e4ca2e855340a5efc9e7e44bc95b8c1e6589\"" Oct 9 07:54:05.530740 systemd[1]: Started cri-containerd-9ccd0a7f46d14e647566d0d9db14e4ca2e855340a5efc9e7e44bc95b8c1e6589.scope - libcontainer container 9ccd0a7f46d14e647566d0d9db14e4ca2e855340a5efc9e7e44bc95b8c1e6589. Oct 9 07:54:05.607302 containerd[1473]: time="2024-10-09T07:54:05.607213677Z" level=info msg="StartContainer for \"9ccd0a7f46d14e647566d0d9db14e4ca2e855340a5efc9e7e44bc95b8c1e6589\" returns successfully" Oct 9 07:54:05.725427 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:54:05.727121 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Oct 9 07:54:05.978850 kubelet[2549]: E1009 07:54:05.978662 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:06.086768 kubelet[2549]: I1009 07:54:06.085561 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ww74c" podStartSLOduration=1.718064332 podStartE2EDuration="16.065990064s" podCreationTimestamp="2024-10-09 07:53:50 +0000 UTC" firstStartedPulling="2024-10-09 07:53:50.681318753 +0000 UTC m=+20.193201970" lastFinishedPulling="2024-10-09 07:54:05.02924449 +0000 UTC m=+34.541127702" observedRunningTime="2024-10-09 07:54:06.060582866 +0000 UTC m=+35.572466101" watchObservedRunningTime="2024-10-09 07:54:06.065990064 +0000 UTC m=+35.577873287" Oct 9 07:54:06.962214 kubelet[2549]: E1009 07:54:06.961892 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:06.992506 systemd[1]: run-containerd-runc-k8s.io-9ccd0a7f46d14e647566d0d9db14e4ca2e855340a5efc9e7e44bc95b8c1e6589-runc.Jlo2y3.mount: Deactivated successfully. 
Oct 9 07:54:10.093672 kubelet[2549]: I1009 07:54:10.092931 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:54:10.102714 kubelet[2549]: E1009 07:54:10.102010 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:10.975008 kubelet[2549]: E1009 07:54:10.974966 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:10.999112 kernel: bpftool[3809]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:54:11.436720 systemd-networkd[1366]: vxlan.calico: Link UP Oct 9 07:54:11.436733 systemd-networkd[1366]: vxlan.calico: Gained carrier Oct 9 07:54:12.656652 containerd[1473]: time="2024-10-09T07:54:12.656591824Z" level=info msg="StopPodSandbox for \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\"" Oct 9 07:54:12.838547 systemd[1]: Started sshd@7-143.198.229.119:22-139.178.89.65:50350.service - OpenSSH per-connection server daemon (139.178.89.65:50350). Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.742 [INFO][3924] k8s.go 608: Cleaning up netns ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.743 [INFO][3924] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" iface="eth0" netns="/var/run/netns/cni-baa6b56a-7183-b2ef-13b3-78dd085899a2" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.744 [INFO][3924] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" iface="eth0" netns="/var/run/netns/cni-baa6b56a-7183-b2ef-13b3-78dd085899a2" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.745 [INFO][3924] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" iface="eth0" netns="/var/run/netns/cni-baa6b56a-7183-b2ef-13b3-78dd085899a2" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.745 [INFO][3924] k8s.go 615: Releasing IP address(es) ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.745 [INFO][3924] utils.go 188: Calico CNI releasing IP address ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.895 [INFO][3930] ipam_plugin.go 417: Releasing address using handleID ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.896 [INFO][3930] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.897 [INFO][3930] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.915 [WARNING][3930] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.915 [INFO][3930] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.917 [INFO][3930] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:12.923500 containerd[1473]: 2024-10-09 07:54:12.920 [INFO][3924] k8s.go 621: Teardown processing complete. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:12.927508 containerd[1473]: time="2024-10-09T07:54:12.927452874Z" level=info msg="TearDown network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\" successfully" Oct 9 07:54:12.927508 containerd[1473]: time="2024-10-09T07:54:12.927495060Z" level=info msg="StopPodSandbox for \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\" returns successfully" Oct 9 07:54:12.928301 kubelet[2549]: E1009 07:54:12.928214 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:12.929841 systemd[1]: run-netns-cni\x2dbaa6b56a\x2d7183\x2db2ef\x2d13b3\x2d78dd085899a2.mount: Deactivated successfully. 
Oct 9 07:54:12.934727 sshd[3935]: Accepted publickey for core from 139.178.89.65 port 50350 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:12.937310 containerd[1473]: time="2024-10-09T07:54:12.937172268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lsh4c,Uid:4d558ca1-372f-4592-9031-7ad3d0cc0acb,Namespace:kube-system,Attempt:1,}" Oct 9 07:54:12.938547 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:12.947587 systemd-logind[1448]: New session 8 of user core. Oct 9 07:54:12.953604 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 07:54:13.055123 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Oct 9 07:54:13.178583 sshd[3935]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:13.184956 systemd[1]: sshd@7-143.198.229.119:22-139.178.89.65:50350.service: Deactivated successfully. Oct 9 07:54:13.188224 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:54:13.193915 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:54:13.197456 systemd-logind[1448]: Removed session 8. 
Oct 9 07:54:13.243271 systemd-networkd[1366]: cali2fa380cd917: Link UP Oct 9 07:54:13.250030 systemd-networkd[1366]: cali2fa380cd917: Gained carrier Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.079 [INFO][3942] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0 coredns-76f75df574- kube-system 4d558ca1-372f-4592-9031-7ad3d0cc0acb 789 0 2024-10-09 07:53:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-5-a4f881141a coredns-76f75df574-lsh4c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fa380cd917 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.080 [INFO][3942] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.145 [INFO][3960] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" HandleID="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.168 [INFO][3960] ipam_plugin.go 270: Auto assigning IP 
ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" HandleID="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000504a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-5-a4f881141a", "pod":"coredns-76f75df574-lsh4c", "timestamp":"2024-10-09 07:54:13.145340674 +0000 UTC"}, Hostname:"ci-4081.1.0-5-a4f881141a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.168 [INFO][3960] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.169 [INFO][3960] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.169 [INFO][3960] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-5-a4f881141a' Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.173 [INFO][3960] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.193 [INFO][3960] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.203 [INFO][3960] ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.207 [INFO][3960] ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.211 [INFO][3960] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.211 [INFO][3960] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.213 [INFO][3960] ipam.go 1685: Creating new handle: k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85 Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.224 [INFO][3960] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.233 [INFO][3960] ipam.go 1216: Successfully claimed IPs: [192.168.52.129/26] 
block=192.168.52.128/26 handle="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.233 [INFO][3960] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.233 [INFO][3960] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:13.273375 containerd[1473]: 2024-10-09 07:54:13.234 [INFO][3960] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" HandleID="k8s-pod-network.4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:13.276455 containerd[1473]: 2024-10-09 07:54:13.238 [INFO][3942] k8s.go 386: Populated endpoint ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d558ca1-372f-4592-9031-7ad3d0cc0acb", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"", Pod:"coredns-76f75df574-lsh4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fa380cd917", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:13.276455 containerd[1473]: 2024-10-09 07:54:13.238 [INFO][3942] k8s.go 387: Calico CNI using IPs: [192.168.52.129/32] ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:13.276455 containerd[1473]: 2024-10-09 07:54:13.238 [INFO][3942] dataplane_linux.go 68: Setting the host side veth name to cali2fa380cd917 ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:13.276455 containerd[1473]: 2024-10-09 07:54:13.241 [INFO][3942] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" 
Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:13.276455 containerd[1473]: 2024-10-09 07:54:13.242 [INFO][3942] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d558ca1-372f-4592-9031-7ad3d0cc0acb", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85", Pod:"coredns-76f75df574-lsh4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fa380cd917", MAC:"72:8a:1d:90:ef:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:13.276455 containerd[1473]: 2024-10-09 07:54:13.267 [INFO][3942] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85" Namespace="kube-system" Pod="coredns-76f75df574-lsh4c" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:13.328297 containerd[1473]: time="2024-10-09T07:54:13.327961054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:13.328297 containerd[1473]: time="2024-10-09T07:54:13.328042127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:13.328297 containerd[1473]: time="2024-10-09T07:54:13.328164842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:13.331360 containerd[1473]: time="2024-10-09T07:54:13.328899147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:13.375528 systemd[1]: Started cri-containerd-4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85.scope - libcontainer container 4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85. 
Oct 9 07:54:13.458358 containerd[1473]: time="2024-10-09T07:54:13.457917470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lsh4c,Uid:4d558ca1-372f-4592-9031-7ad3d0cc0acb,Namespace:kube-system,Attempt:1,} returns sandbox id \"4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85\"" Oct 9 07:54:13.460550 kubelet[2549]: E1009 07:54:13.460516 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:13.474979 containerd[1473]: time="2024-10-09T07:54:13.474461635Z" level=info msg="CreateContainer within sandbox \"4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:54:13.495055 containerd[1473]: time="2024-10-09T07:54:13.494951145Z" level=info msg="CreateContainer within sandbox \"4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1427023606892d5e87af8b27a6c61eb67f694fe9f060e37ec9d22825cc4b256a\"" Oct 9 07:54:13.498045 containerd[1473]: time="2024-10-09T07:54:13.497970330Z" level=info msg="StartContainer for \"1427023606892d5e87af8b27a6c61eb67f694fe9f060e37ec9d22825cc4b256a\"" Oct 9 07:54:13.544455 systemd[1]: Started cri-containerd-1427023606892d5e87af8b27a6c61eb67f694fe9f060e37ec9d22825cc4b256a.scope - libcontainer container 1427023606892d5e87af8b27a6c61eb67f694fe9f060e37ec9d22825cc4b256a. 
Oct 9 07:54:13.585481 containerd[1473]: time="2024-10-09T07:54:13.585433765Z" level=info msg="StartContainer for \"1427023606892d5e87af8b27a6c61eb67f694fe9f060e37ec9d22825cc4b256a\" returns successfully" Oct 9 07:54:13.656733 containerd[1473]: time="2024-10-09T07:54:13.656145007Z" level=info msg="StopPodSandbox for \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\"" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.736 [INFO][4072] k8s.go 608: Cleaning up netns ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.736 [INFO][4072] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" iface="eth0" netns="/var/run/netns/cni-188e8273-24b9-0586-5bf6-b874fb3ed71a" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.737 [INFO][4072] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" iface="eth0" netns="/var/run/netns/cni-188e8273-24b9-0586-5bf6-b874fb3ed71a" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.738 [INFO][4072] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" iface="eth0" netns="/var/run/netns/cni-188e8273-24b9-0586-5bf6-b874fb3ed71a" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.738 [INFO][4072] k8s.go 615: Releasing IP address(es) ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.738 [INFO][4072] utils.go 188: Calico CNI releasing IP address ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.786 [INFO][4079] ipam_plugin.go 417: Releasing address using handleID ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.789 [INFO][4079] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.789 [INFO][4079] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.807 [WARNING][4079] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.807 [INFO][4079] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.816 [INFO][4079] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:13.824028 containerd[1473]: 2024-10-09 07:54:13.820 [INFO][4072] k8s.go 621: Teardown processing complete. ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:13.824735 containerd[1473]: time="2024-10-09T07:54:13.824320663Z" level=info msg="TearDown network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\" successfully" Oct 9 07:54:13.824735 containerd[1473]: time="2024-10-09T07:54:13.824353396Z" level=info msg="StopPodSandbox for \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\" returns successfully" Oct 9 07:54:13.825759 containerd[1473]: time="2024-10-09T07:54:13.825718642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87rvr,Uid:3cebfdf7-f604-4870-8e68-e3e120793ced,Namespace:calico-system,Attempt:1,}" Oct 9 07:54:13.938966 systemd[1]: run-netns-cni\x2d188e8273\x2d24b9\x2d0586\x2d5bf6\x2db874fb3ed71a.mount: Deactivated successfully. 
Oct 9 07:54:13.990701 kubelet[2549]: E1009 07:54:13.990637 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:14.178938 systemd-networkd[1366]: cali77dc339c103: Link UP Oct 9 07:54:14.185957 systemd-networkd[1366]: cali77dc339c103: Gained carrier Oct 9 07:54:14.210447 kubelet[2549]: I1009 07:54:14.210206 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-lsh4c" podStartSLOduration=30.210150097 podStartE2EDuration="30.210150097s" podCreationTimestamp="2024-10-09 07:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:14.049635031 +0000 UTC m=+43.561518256" watchObservedRunningTime="2024-10-09 07:54:14.210150097 +0000 UTC m=+43.722033320" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:13.899 [INFO][4089] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0 csi-node-driver- calico-system 3cebfdf7-f604-4870-8e68-e3e120793ced 805 0 2024-10-09 07:53:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081.1.0-5-a4f881141a csi-node-driver-87rvr eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali77dc339c103 [] []}} ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:13.900 
[INFO][4089] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:13.975 [INFO][4100] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" HandleID="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.053 [INFO][4100] ipam_plugin.go 270: Auto assigning IP ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" HandleID="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-5-a4f881141a", "pod":"csi-node-driver-87rvr", "timestamp":"2024-10-09 07:54:13.975187173 +0000 UTC"}, Hostname:"ci-4081.1.0-5-a4f881141a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.053 [INFO][4100] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.053 [INFO][4100] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.054 [INFO][4100] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-5-a4f881141a' Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.060 [INFO][4100] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.092 [INFO][4100] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.117 [INFO][4100] ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.122 [INFO][4100] ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.125 [INFO][4100] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.125 [INFO][4100] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.129 [INFO][4100] ipam.go 1685: Creating new handle: k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287 Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.152 [INFO][4100] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.169 [INFO][4100] ipam.go 1216: Successfully claimed IPs: [192.168.52.130/26] 
block=192.168.52.128/26 handle="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.169 [INFO][4100] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] handle="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.169 [INFO][4100] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:14.221568 containerd[1473]: 2024-10-09 07:54:14.169 [INFO][4100] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" HandleID="k8s-pod-network.3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:14.222694 containerd[1473]: 2024-10-09 07:54:14.172 [INFO][4089] k8s.go 386: Populated endpoint ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cebfdf7-f604-4870-8e68-e3e120793ced", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"", Pod:"csi-node-driver-87rvr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali77dc339c103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:14.222694 containerd[1473]: 2024-10-09 07:54:14.172 [INFO][4089] k8s.go 387: Calico CNI using IPs: [192.168.52.130/32] ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:14.222694 containerd[1473]: 2024-10-09 07:54:14.173 [INFO][4089] dataplane_linux.go 68: Setting the host side veth name to cali77dc339c103 ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:14.222694 containerd[1473]: 2024-10-09 07:54:14.184 [INFO][4089] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:14.222694 containerd[1473]: 2024-10-09 07:54:14.184 [INFO][4089] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cebfdf7-f604-4870-8e68-e3e120793ced", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287", Pod:"csi-node-driver-87rvr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali77dc339c103", MAC:"8a:3d:c0:55:4e:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:14.222694 containerd[1473]: 2024-10-09 07:54:14.210 [INFO][4089] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287" Namespace="calico-system" 
Pod="csi-node-driver-87rvr" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:14.311326 containerd[1473]: time="2024-10-09T07:54:14.310883724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:14.311326 containerd[1473]: time="2024-10-09T07:54:14.310950975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:14.315688 containerd[1473]: time="2024-10-09T07:54:14.314784609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:14.317908 containerd[1473]: time="2024-10-09T07:54:14.317718951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:14.363409 systemd[1]: Started cri-containerd-3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287.scope - libcontainer container 3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287. 
Oct 9 07:54:14.397900 containerd[1473]: time="2024-10-09T07:54:14.397734553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87rvr,Uid:3cebfdf7-f604-4870-8e68-e3e120793ced,Namespace:calico-system,Attempt:1,} returns sandbox id \"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287\"" Oct 9 07:54:14.409376 containerd[1473]: time="2024-10-09T07:54:14.408788255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:54:14.654459 systemd-networkd[1366]: cali2fa380cd917: Gained IPv6LL Oct 9 07:54:14.998890 kubelet[2549]: E1009 07:54:14.998238 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:15.550399 systemd-networkd[1366]: cali77dc339c103: Gained IPv6LL Oct 9 07:54:15.659035 containerd[1473]: time="2024-10-09T07:54:15.658565821Z" level=info msg="StopPodSandbox for \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\"" Oct 9 07:54:15.659792 containerd[1473]: time="2024-10-09T07:54:15.659284052Z" level=info msg="StopPodSandbox for \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\"" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.780 [INFO][4190] k8s.go 608: Cleaning up netns ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.784 [INFO][4190] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" iface="eth0" netns="/var/run/netns/cni-97c7dcd5-a14d-b58d-c219-f4e2037d577d" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.785 [INFO][4190] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" iface="eth0" netns="/var/run/netns/cni-97c7dcd5-a14d-b58d-c219-f4e2037d577d" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.785 [INFO][4190] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" iface="eth0" netns="/var/run/netns/cni-97c7dcd5-a14d-b58d-c219-f4e2037d577d" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.785 [INFO][4190] k8s.go 615: Releasing IP address(es) ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.786 [INFO][4190] utils.go 188: Calico CNI releasing IP address ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.864 [INFO][4207] ipam_plugin.go 417: Releasing address using handleID ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.864 [INFO][4207] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.864 [INFO][4207] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.872 [WARNING][4207] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.872 [INFO][4207] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.875 [INFO][4207] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:15.888391 containerd[1473]: 2024-10-09 07:54:15.883 [INFO][4190] k8s.go 621: Teardown processing complete. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:15.891425 containerd[1473]: time="2024-10-09T07:54:15.891289411Z" level=info msg="TearDown network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\" successfully" Oct 9 07:54:15.891425 containerd[1473]: time="2024-10-09T07:54:15.891370140Z" level=info msg="StopPodSandbox for \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\" returns successfully" Oct 9 07:54:15.894052 systemd[1]: run-netns-cni\x2d97c7dcd5\x2da14d\x2db58d\x2dc219\x2df4e2037d577d.mount: Deactivated successfully. 
Oct 9 07:54:15.895983 containerd[1473]: time="2024-10-09T07:54:15.894348433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8dc58b7-n2qsx,Uid:e429ea71-5b02-4029-b597-4f1188a52c84,Namespace:calico-system,Attempt:1,}" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.819 [INFO][4199] k8s.go 608: Cleaning up netns ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.819 [INFO][4199] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" iface="eth0" netns="/var/run/netns/cni-289824a0-7fbd-685e-0007-97c2f2dafeab" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.821 [INFO][4199] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" iface="eth0" netns="/var/run/netns/cni-289824a0-7fbd-685e-0007-97c2f2dafeab" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.823 [INFO][4199] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" iface="eth0" netns="/var/run/netns/cni-289824a0-7fbd-685e-0007-97c2f2dafeab" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.823 [INFO][4199] k8s.go 615: Releasing IP address(es) ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.823 [INFO][4199] utils.go 188: Calico CNI releasing IP address ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.869 [INFO][4213] ipam_plugin.go 417: Releasing address using handleID ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.869 [INFO][4213] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.876 [INFO][4213] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.904 [WARNING][4213] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.904 [INFO][4213] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.907 [INFO][4213] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:15.920297 containerd[1473]: 2024-10-09 07:54:15.918 [INFO][4199] k8s.go 621: Teardown processing complete. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:15.928243 containerd[1473]: time="2024-10-09T07:54:15.920976370Z" level=info msg="TearDown network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\" successfully" Oct 9 07:54:15.928243 containerd[1473]: time="2024-10-09T07:54:15.921013027Z" level=info msg="StopPodSandbox for \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\" returns successfully" Oct 9 07:54:15.925905 systemd[1]: run-netns-cni\x2d289824a0\x2d7fbd\x2d685e\x2d0007\x2d97c2f2dafeab.mount: Deactivated successfully. 
Oct 9 07:54:15.928492 kubelet[2549]: E1009 07:54:15.924003 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:15.931860 containerd[1473]: time="2024-10-09T07:54:15.931454538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2p7nw,Uid:728b2665-75cd-48cb-bec2-37026553584c,Namespace:kube-system,Attempt:1,}"
Oct 9 07:54:15.942670 containerd[1473]: time="2024-10-09T07:54:15.942041809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:15.948858 containerd[1473]: time="2024-10-09T07:54:15.948776152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Oct 9 07:54:15.952623 containerd[1473]: time="2024-10-09T07:54:15.952577729Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:15.955869 containerd[1473]: time="2024-10-09T07:54:15.955768937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:15.957474 containerd[1473]: time="2024-10-09T07:54:15.957387807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.548535855s"
Oct 9 07:54:15.957474 containerd[1473]: time="2024-10-09T07:54:15.957430423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Oct 9 07:54:15.965985 containerd[1473]: time="2024-10-09T07:54:15.965786724Z" level=info msg="CreateContainer within sandbox \"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 9 07:54:15.992352 containerd[1473]: time="2024-10-09T07:54:15.992261567Z" level=info msg="CreateContainer within sandbox \"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"15ca15f9191a067d8523bfaba70c79b00d524ccca3d3c3f8e942a972aa1d5699\""
Oct 9 07:54:15.997356 containerd[1473]: time="2024-10-09T07:54:15.996109294Z" level=info msg="StartContainer for \"15ca15f9191a067d8523bfaba70c79b00d524ccca3d3c3f8e942a972aa1d5699\""
Oct 9 07:54:16.009841 kubelet[2549]: E1009 07:54:16.009797 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:16.084310 systemd[1]: Started cri-containerd-15ca15f9191a067d8523bfaba70c79b00d524ccca3d3c3f8e942a972aa1d5699.scope - libcontainer container 15ca15f9191a067d8523bfaba70c79b00d524ccca3d3c3f8e942a972aa1d5699.
Oct 9 07:54:16.147881 containerd[1473]: time="2024-10-09T07:54:16.147594568Z" level=info msg="StartContainer for \"15ca15f9191a067d8523bfaba70c79b00d524ccca3d3c3f8e942a972aa1d5699\" returns successfully"
Oct 9 07:54:16.151011 containerd[1473]: time="2024-10-09T07:54:16.150762405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 9 07:54:16.238904 systemd-networkd[1366]: calidc8141028fa: Link UP
Oct 9 07:54:16.240262 systemd-networkd[1366]: calidc8141028fa: Gained carrier
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.032 [INFO][4220] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0 calico-kube-controllers-5d8dc58b7- calico-system e429ea71-5b02-4029-b597-4f1188a52c84 837 0 2024-10-09 07:53:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d8dc58b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.1.0-5-a4f881141a calico-kube-controllers-5d8dc58b7-n2qsx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidc8141028fa [] []}} ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.032 [INFO][4220] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.117 [INFO][4256] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" HandleID="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.134 [INFO][4256] ipam_plugin.go 270: Auto assigning IP ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" HandleID="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-5-a4f881141a", "pod":"calico-kube-controllers-5d8dc58b7-n2qsx", "timestamp":"2024-10-09 07:54:16.116977148 +0000 UTC"}, Hostname:"ci-4081.1.0-5-a4f881141a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.134 [INFO][4256] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.134 [INFO][4256] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.135 [INFO][4256] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-5-a4f881141a'
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.140 [INFO][4256] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.152 [INFO][4256] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.176 [INFO][4256] ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.180 [INFO][4256] ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.184 [INFO][4256] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.184 [INFO][4256] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.188 [INFO][4256] ipam.go 1685: Creating new handle: k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.212 [INFO][4256] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.228 [INFO][4256] ipam.go 1216: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 handle="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.229 [INFO][4256] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.229 [INFO][4256] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:54:16.275529 containerd[1473]: 2024-10-09 07:54:16.229 [INFO][4256] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" HandleID="k8s-pod-network.43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0"
Oct 9 07:54:16.276854 containerd[1473]: 2024-10-09 07:54:16.232 [INFO][4220] k8s.go 386: Populated endpoint ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0", GenerateName:"calico-kube-controllers-5d8dc58b7-", Namespace:"calico-system", SelfLink:"", UID:"e429ea71-5b02-4029-b597-4f1188a52c84", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8dc58b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"", Pod:"calico-kube-controllers-5d8dc58b7-n2qsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc8141028fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:54:16.276854 containerd[1473]: 2024-10-09 07:54:16.232 [INFO][4220] k8s.go 387: Calico CNI using IPs: [192.168.52.131/32] ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0"
Oct 9 07:54:16.276854 containerd[1473]: 2024-10-09 07:54:16.232 [INFO][4220] dataplane_linux.go 68: Setting the host side veth name to calidc8141028fa ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0"
Oct 9 07:54:16.276854 containerd[1473]: 2024-10-09 07:54:16.241 [INFO][4220] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0"
Oct 9 07:54:16.276854 containerd[1473]: 2024-10-09 07:54:16.243 [INFO][4220] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0", GenerateName:"calico-kube-controllers-5d8dc58b7-", Namespace:"calico-system", SelfLink:"", UID:"e429ea71-5b02-4029-b597-4f1188a52c84", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8dc58b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03", Pod:"calico-kube-controllers-5d8dc58b7-n2qsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc8141028fa", MAC:"92:69:0f:95:7c:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:54:16.276854 containerd[1473]: 2024-10-09 07:54:16.265 [INFO][4220] k8s.go 500: Wrote updated endpoint to datastore ContainerID="43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03" Namespace="calico-system" Pod="calico-kube-controllers-5d8dc58b7-n2qsx" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0"
Oct 9 07:54:16.324836 containerd[1473]: time="2024-10-09T07:54:16.324654674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:54:16.324836 containerd[1473]: time="2024-10-09T07:54:16.324743351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:54:16.324836 containerd[1473]: time="2024-10-09T07:54:16.324761607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:54:16.325293 containerd[1473]: time="2024-10-09T07:54:16.324867279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:54:16.348122 systemd-networkd[1366]: cali0db7bb620ca: Link UP
Oct 9 07:54:16.348485 systemd-networkd[1366]: cali0db7bb620ca: Gained carrier
Oct 9 07:54:16.366540 systemd[1]: Started cri-containerd-43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03.scope - libcontainer container 43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03.
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.045 [INFO][4229] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0 coredns-76f75df574- kube-system 728b2665-75cd-48cb-bec2-37026553584c 838 0 2024-10-09 07:53:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-5-a4f881141a coredns-76f75df574-2p7nw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0db7bb620ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.045 [INFO][4229] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.136 [INFO][4267] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" HandleID="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.171 [INFO][4267] ipam_plugin.go 270: Auto assigning IP ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" HandleID="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c500), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-5-a4f881141a", "pod":"coredns-76f75df574-2p7nw", "timestamp":"2024-10-09 07:54:16.136196692 +0000 UTC"}, Hostname:"ci-4081.1.0-5-a4f881141a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.172 [INFO][4267] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.229 [INFO][4267] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.229 [INFO][4267] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-5-a4f881141a'
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.233 [INFO][4267] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.248 [INFO][4267] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.262 [INFO][4267] ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.274 [INFO][4267] ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.284 [INFO][4267] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.284 [INFO][4267] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.289 [INFO][4267] ipam.go 1685: Creating new handle: k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.317 [INFO][4267] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.334 [INFO][4267] ipam.go 1216: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 handle="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.334 [INFO][4267] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] handle="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.334 [INFO][4267] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:54:16.391507 containerd[1473]: 2024-10-09 07:54:16.334 [INFO][4267] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" HandleID="k8s-pod-network.ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:16.396406 containerd[1473]: 2024-10-09 07:54:16.340 [INFO][4229] k8s.go 386: Populated endpoint ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"728b2665-75cd-48cb-bec2-37026553584c", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"", Pod:"coredns-76f75df574-2p7nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0db7bb620ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:54:16.396406 containerd[1473]: 2024-10-09 07:54:16.341 [INFO][4229] k8s.go 387: Calico CNI using IPs: [192.168.52.132/32] ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:16.396406 containerd[1473]: 2024-10-09 07:54:16.341 [INFO][4229] dataplane_linux.go 68: Setting the host side veth name to cali0db7bb620ca ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:16.396406 containerd[1473]: 2024-10-09 07:54:16.349 [INFO][4229] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:16.396406 containerd[1473]: 2024-10-09 07:54:16.351 [INFO][4229] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"728b2665-75cd-48cb-bec2-37026553584c", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319", Pod:"coredns-76f75df574-2p7nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0db7bb620ca", MAC:"fa:6a:3f:98:ec:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:54:16.396406 containerd[1473]: 2024-10-09 07:54:16.378 [INFO][4229] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319" Namespace="kube-system" Pod="coredns-76f75df574-2p7nw" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:16.452480 containerd[1473]: time="2024-10-09T07:54:16.450484084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:54:16.452480 containerd[1473]: time="2024-10-09T07:54:16.450571450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:54:16.452480 containerd[1473]: time="2024-10-09T07:54:16.450588639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:54:16.452480 containerd[1473]: time="2024-10-09T07:54:16.450965353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:54:16.471263 containerd[1473]: time="2024-10-09T07:54:16.471211854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d8dc58b7-n2qsx,Uid:e429ea71-5b02-4029-b597-4f1188a52c84,Namespace:calico-system,Attempt:1,} returns sandbox id \"43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03\""
Oct 9 07:54:16.492506 systemd[1]: Started cri-containerd-ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319.scope - libcontainer container ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319.
Oct 9 07:54:16.557390 containerd[1473]: time="2024-10-09T07:54:16.557333694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2p7nw,Uid:728b2665-75cd-48cb-bec2-37026553584c,Namespace:kube-system,Attempt:1,} returns sandbox id \"ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319\""
Oct 9 07:54:16.558663 kubelet[2549]: E1009 07:54:16.558626 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:16.563569 containerd[1473]: time="2024-10-09T07:54:16.563116438Z" level=info msg="CreateContainer within sandbox \"ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 9 07:54:16.579345 containerd[1473]: time="2024-10-09T07:54:16.579149616Z" level=info msg="CreateContainer within sandbox \"ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f03d73560cc43f8efec54075989cac642c21689d04c4bca23d61154925db0f56\""
Oct 9 07:54:16.581149 containerd[1473]: time="2024-10-09T07:54:16.579907952Z" level=info msg="StartContainer for \"f03d73560cc43f8efec54075989cac642c21689d04c4bca23d61154925db0f56\""
Oct 9 07:54:16.635410 systemd[1]: Started cri-containerd-f03d73560cc43f8efec54075989cac642c21689d04c4bca23d61154925db0f56.scope - libcontainer container f03d73560cc43f8efec54075989cac642c21689d04c4bca23d61154925db0f56.
Oct 9 07:54:16.675281 containerd[1473]: time="2024-10-09T07:54:16.675223147Z" level=info msg="StartContainer for \"f03d73560cc43f8efec54075989cac642c21689d04c4bca23d61154925db0f56\" returns successfully"
Oct 9 07:54:17.020431 kubelet[2549]: E1009 07:54:17.018709 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:17.068848 kubelet[2549]: I1009 07:54:17.068491 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2p7nw" podStartSLOduration=33.068436799 podStartE2EDuration="33.068436799s" podCreationTimestamp="2024-10-09 07:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:17.05086189 +0000 UTC m=+46.562745092" watchObservedRunningTime="2024-10-09 07:54:17.068436799 +0000 UTC m=+46.580320021"
Oct 9 07:54:17.470407 systemd-networkd[1366]: calidc8141028fa: Gained IPv6LL
Oct 9 07:54:17.540562 containerd[1473]: time="2024-10-09T07:54:17.539965074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 9 07:54:17.546178 containerd[1473]: time="2024-10-09T07:54:17.545785840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.394953077s"
Oct 9 07:54:17.546178 containerd[1473]: time="2024-10-09T07:54:17.545857350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 9 07:54:17.549440 containerd[1473]: time="2024-10-09T07:54:17.548360423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Oct 9 07:54:17.549440 containerd[1473]: time="2024-10-09T07:54:17.548711260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:17.569747 containerd[1473]: time="2024-10-09T07:54:17.569497166Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:17.570650 containerd[1473]: time="2024-10-09T07:54:17.570517499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:17.572710 containerd[1473]: time="2024-10-09T07:54:17.572457585Z" level=info msg="CreateContainer within sandbox \"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 9 07:54:17.589306 containerd[1473]: time="2024-10-09T07:54:17.589125607Z" level=info msg="CreateContainer within sandbox \"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a305341b30fb16e6fa81ea437b7ba59ba03ee7076d9f2770b6b05b519c60a6f7\""
Oct 9 07:54:17.591223 containerd[1473]: time="2024-10-09T07:54:17.590012031Z" level=info msg="StartContainer for \"a305341b30fb16e6fa81ea437b7ba59ba03ee7076d9f2770b6b05b519c60a6f7\""
Oct 9 07:54:17.647427 systemd[1]: Started cri-containerd-a305341b30fb16e6fa81ea437b7ba59ba03ee7076d9f2770b6b05b519c60a6f7.scope - libcontainer container a305341b30fb16e6fa81ea437b7ba59ba03ee7076d9f2770b6b05b519c60a6f7.
Oct 9 07:54:17.703176 containerd[1473]: time="2024-10-09T07:54:17.702912464Z" level=info msg="StartContainer for \"a305341b30fb16e6fa81ea437b7ba59ba03ee7076d9f2770b6b05b519c60a6f7\" returns successfully"
Oct 9 07:54:17.879115 kubelet[2549]: I1009 07:54:17.878702 2549 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 9 07:54:17.889390 kubelet[2549]: I1009 07:54:17.889182 2549 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 9 07:54:18.043303 kubelet[2549]: E1009 07:54:18.043163 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:18.199520 systemd[1]: Started sshd@8-143.198.229.119:22-139.178.89.65:55100.service - OpenSSH per-connection server daemon (139.178.89.65:55100).
Oct 9 07:54:18.303184 systemd-networkd[1366]: cali0db7bb620ca: Gained IPv6LL
Oct 9 07:54:18.313340 sshd[4491]: Accepted publickey for core from 139.178.89.65 port 55100 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:54:18.318123 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:54:18.327826 systemd-logind[1448]: New session 9 of user core.
Oct 9 07:54:18.333469 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 9 07:54:18.700469 sshd[4491]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:18.706074 systemd[1]: sshd@8-143.198.229.119:22-139.178.89.65:55100.service: Deactivated successfully.
Oct 9 07:54:18.708936 systemd[1]: session-9.scope: Deactivated successfully.
Oct 9 07:54:18.711211 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit.
Oct 9 07:54:18.714463 systemd-logind[1448]: Removed session 9. Oct 9 07:54:19.046528 kubelet[2549]: E1009 07:54:19.046356 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:19.620055 containerd[1473]: time="2024-10-09T07:54:19.619982635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:19.621846 containerd[1473]: time="2024-10-09T07:54:19.621725916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:54:19.623496 containerd[1473]: time="2024-10-09T07:54:19.623426852Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:19.627033 containerd[1473]: time="2024-10-09T07:54:19.626943310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:19.628203 containerd[1473]: time="2024-10-09T07:54:19.628124065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.079701283s" Oct 9 07:54:19.628572 containerd[1473]: time="2024-10-09T07:54:19.628405298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:54:19.659559 containerd[1473]: time="2024-10-09T07:54:19.659318316Z" level=info msg="CreateContainer within sandbox \"43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:54:19.707586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112176620.mount: Deactivated successfully. Oct 9 07:54:19.713200 containerd[1473]: time="2024-10-09T07:54:19.712856042Z" level=info msg="CreateContainer within sandbox \"43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"08af2b261c102a660d4e393da811cc815dc19d859946ade8b50e5ec353d6872c\"" Oct 9 07:54:19.719011 containerd[1473]: time="2024-10-09T07:54:19.715974521Z" level=info msg="StartContainer for \"08af2b261c102a660d4e393da811cc815dc19d859946ade8b50e5ec353d6872c\"" Oct 9 07:54:19.854364 systemd[1]: Started cri-containerd-08af2b261c102a660d4e393da811cc815dc19d859946ade8b50e5ec353d6872c.scope - libcontainer container 08af2b261c102a660d4e393da811cc815dc19d859946ade8b50e5ec353d6872c. 
Oct 9 07:54:19.923381 containerd[1473]: time="2024-10-09T07:54:19.923185901Z" level=info msg="StartContainer for \"08af2b261c102a660d4e393da811cc815dc19d859946ade8b50e5ec353d6872c\" returns successfully" Oct 9 07:54:20.095928 kubelet[2549]: I1009 07:54:20.095859 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-87rvr" podStartSLOduration=26.948930231 podStartE2EDuration="30.095783102s" podCreationTimestamp="2024-10-09 07:53:50 +0000 UTC" firstStartedPulling="2024-10-09 07:54:14.400395661 +0000 UTC m=+43.912278867" lastFinishedPulling="2024-10-09 07:54:17.547248537 +0000 UTC m=+47.059131738" observedRunningTime="2024-10-09 07:54:18.069141062 +0000 UTC m=+47.581024284" watchObservedRunningTime="2024-10-09 07:54:20.095783102 +0000 UTC m=+49.607666324" Oct 9 07:54:20.099933 kubelet[2549]: I1009 07:54:20.097993 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d8dc58b7-n2qsx" podStartSLOduration=26.942304354 podStartE2EDuration="30.097819942s" podCreationTimestamp="2024-10-09 07:53:50 +0000 UTC" firstStartedPulling="2024-10-09 07:54:16.473812703 +0000 UTC m=+45.985695904" lastFinishedPulling="2024-10-09 07:54:19.629328287 +0000 UTC m=+49.141211492" observedRunningTime="2024-10-09 07:54:20.090443617 +0000 UTC m=+49.602326840" watchObservedRunningTime="2024-10-09 07:54:20.097819942 +0000 UTC m=+49.609703166" Oct 9 07:54:21.850039 systemd[1]: run-containerd-runc-k8s.io-9ccd0a7f46d14e647566d0d9db14e4ca2e855340a5efc9e7e44bc95b8c1e6589-runc.67elvf.mount: Deactivated successfully. 
Oct 9 07:54:21.993126 kubelet[2549]: E1009 07:54:21.990299 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Oct 9 07:54:23.717501 systemd[1]: Started sshd@9-143.198.229.119:22-139.178.89.65:55110.service - OpenSSH per-connection server daemon (139.178.89.65:55110). Oct 9 07:54:23.848164 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 55110 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:23.851904 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:23.863188 systemd-logind[1448]: New session 10 of user core. Oct 9 07:54:23.869426 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:54:24.273337 sshd[4604]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:24.281813 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:54:24.282844 systemd[1]: sshd@9-143.198.229.119:22-139.178.89.65:55110.service: Deactivated successfully. Oct 9 07:54:24.289930 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:54:24.296191 systemd-logind[1448]: Removed session 10. Oct 9 07:54:29.305241 systemd[1]: Started sshd@10-143.198.229.119:22-139.178.89.65:54182.service - OpenSSH per-connection server daemon (139.178.89.65:54182). Oct 9 07:54:29.352249 sshd[4620]: Accepted publickey for core from 139.178.89.65 port 54182 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:29.353983 sshd[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:29.361187 systemd-logind[1448]: New session 11 of user core. Oct 9 07:54:29.371550 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 9 07:54:29.536643 sshd[4620]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:29.548156 systemd[1]: sshd@10-143.198.229.119:22-139.178.89.65:54182.service: Deactivated successfully. Oct 9 07:54:29.551773 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:54:29.555869 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:54:29.565246 systemd[1]: Started sshd@11-143.198.229.119:22-139.178.89.65:54194.service - OpenSSH per-connection server daemon (139.178.89.65:54194). Oct 9 07:54:29.566622 systemd-logind[1448]: Removed session 11. Oct 9 07:54:29.633742 sshd[4634]: Accepted publickey for core from 139.178.89.65 port 54194 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:29.636319 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:29.645584 systemd-logind[1448]: New session 12 of user core. Oct 9 07:54:29.650733 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:54:29.920526 sshd[4634]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:29.932938 systemd[1]: sshd@11-143.198.229.119:22-139.178.89.65:54194.service: Deactivated successfully. Oct 9 07:54:29.936628 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:54:29.939893 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:54:29.953731 systemd[1]: Started sshd@12-143.198.229.119:22-139.178.89.65:54206.service - OpenSSH per-connection server daemon (139.178.89.65:54206). Oct 9 07:54:29.958702 systemd-logind[1448]: Removed session 12. Oct 9 07:54:30.031170 sshd[4644]: Accepted publickey for core from 139.178.89.65 port 54206 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:30.033530 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:30.050627 systemd-logind[1448]: New session 13 of user core. 
Oct 9 07:54:30.057362 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:54:30.253015 sshd[4644]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:30.260290 systemd[1]: sshd@12-143.198.229.119:22-139.178.89.65:54206.service: Deactivated successfully. Oct 9 07:54:30.264928 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:54:30.266337 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:54:30.268881 systemd-logind[1448]: Removed session 13. Oct 9 07:54:30.700955 containerd[1473]: time="2024-10-09T07:54:30.700451955Z" level=info msg="StopPodSandbox for \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\"" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.766 [WARNING][4669] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cebfdf7-f604-4870-8e68-e3e120793ced", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287", Pod:"csi-node-driver-87rvr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali77dc339c103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.767 [INFO][4669] k8s.go 608: Cleaning up netns ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.767 [INFO][4669] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" iface="eth0" netns="" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.767 [INFO][4669] k8s.go 615: Releasing IP address(es) ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.767 [INFO][4669] utils.go 188: Calico CNI releasing IP address ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.822 [INFO][4675] ipam_plugin.go 417: Releasing address using handleID ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.823 [INFO][4675] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.823 [INFO][4675] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.832 [WARNING][4675] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.833 [INFO][4675] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.836 [INFO][4675] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:30.841219 containerd[1473]: 2024-10-09 07:54:30.838 [INFO][4669] k8s.go 621: Teardown processing complete. 
ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.841219 containerd[1473]: time="2024-10-09T07:54:30.841222129Z" level=info msg="TearDown network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\" successfully" Oct 9 07:54:30.841814 containerd[1473]: time="2024-10-09T07:54:30.841249165Z" level=info msg="StopPodSandbox for \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\" returns successfully" Oct 9 07:54:30.841995 containerd[1473]: time="2024-10-09T07:54:30.841957320Z" level=info msg="RemovePodSandbox for \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\"" Oct 9 07:54:30.842314 containerd[1473]: time="2024-10-09T07:54:30.842286496Z" level=info msg="Forcibly stopping sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\"" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.903 [WARNING][4694] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cebfdf7-f604-4870-8e68-e3e120793ced", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"3de6cb1c4e21eaa9eb41bab60530bc238baf28c027b4ed68326cd31bee541287", Pod:"csi-node-driver-87rvr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali77dc339c103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.904 [INFO][4694] k8s.go 608: Cleaning up netns ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.904 [INFO][4694] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" iface="eth0" netns="" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.904 [INFO][4694] k8s.go 615: Releasing IP address(es) ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.904 [INFO][4694] utils.go 188: Calico CNI releasing IP address ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.938 [INFO][4700] ipam_plugin.go 417: Releasing address using handleID ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.939 [INFO][4700] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.939 [INFO][4700] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.949 [WARNING][4700] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.949 [INFO][4700] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" HandleID="k8s-pod-network.3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Workload="ci--4081.1.0--5--a4f881141a-k8s-csi--node--driver--87rvr-eth0" Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.953 [INFO][4700] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:30.957722 containerd[1473]: 2024-10-09 07:54:30.954 [INFO][4694] k8s.go 621: Teardown processing complete. ContainerID="3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d" Oct 9 07:54:30.957722 containerd[1473]: time="2024-10-09T07:54:30.957677089Z" level=info msg="TearDown network for sandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\" successfully" Oct 9 07:54:30.963427 containerd[1473]: time="2024-10-09T07:54:30.963362715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:54:30.963633 containerd[1473]: time="2024-10-09T07:54:30.963470748Z" level=info msg="RemovePodSandbox \"3ee1cb4226ea45403e536e1242afacae46ff673c032b1129956dc51870358a8d\" returns successfully" Oct 9 07:54:30.964602 containerd[1473]: time="2024-10-09T07:54:30.964553664Z" level=info msg="StopPodSandbox for \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\"" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.024 [WARNING][4719] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0", GenerateName:"calico-kube-controllers-5d8dc58b7-", Namespace:"calico-system", SelfLink:"", UID:"e429ea71-5b02-4029-b597-4f1188a52c84", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8dc58b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03", Pod:"calico-kube-controllers-5d8dc58b7-n2qsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc8141028fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.025 [INFO][4719] k8s.go 608: Cleaning up netns ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.025 [INFO][4719] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" iface="eth0" netns="" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.025 [INFO][4719] k8s.go 615: Releasing IP address(es) ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.025 [INFO][4719] utils.go 188: Calico CNI releasing IP address ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.085 [INFO][4725] ipam_plugin.go 417: Releasing address using handleID ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.085 [INFO][4725] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.085 [INFO][4725] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.118 [WARNING][4725] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.118 [INFO][4725] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.124 [INFO][4725] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:31.129239 containerd[1473]: 2024-10-09 07:54:31.126 [INFO][4719] k8s.go 621: Teardown processing complete. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.129873 containerd[1473]: time="2024-10-09T07:54:31.129256120Z" level=info msg="TearDown network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\" successfully" Oct 9 07:54:31.129873 containerd[1473]: time="2024-10-09T07:54:31.129287211Z" level=info msg="StopPodSandbox for \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\" returns successfully" Oct 9 07:54:31.131017 containerd[1473]: time="2024-10-09T07:54:31.130809705Z" level=info msg="RemovePodSandbox for \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\"" Oct 9 07:54:31.131017 containerd[1473]: time="2024-10-09T07:54:31.130859007Z" level=info msg="Forcibly stopping sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\"" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.180 [WARNING][4743] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0", GenerateName:"calico-kube-controllers-5d8dc58b7-", Namespace:"calico-system", SelfLink:"", UID:"e429ea71-5b02-4029-b597-4f1188a52c84", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d8dc58b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"43cd7260a59ddc69198b27154525cb819283f814246d5e1abda62e0896d7bc03", Pod:"calico-kube-controllers-5d8dc58b7-n2qsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc8141028fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.180 [INFO][4743] k8s.go 608: Cleaning up netns ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.180 [INFO][4743] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" iface="eth0" netns="" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.180 [INFO][4743] k8s.go 615: Releasing IP address(es) ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.180 [INFO][4743] utils.go 188: Calico CNI releasing IP address ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.208 [INFO][4749] ipam_plugin.go 417: Releasing address using handleID ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.209 [INFO][4749] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.209 [INFO][4749] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.219 [WARNING][4749] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.219 [INFO][4749] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" HandleID="k8s-pod-network.fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--kube--controllers--5d8dc58b7--n2qsx-eth0" Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.222 [INFO][4749] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:31.227784 containerd[1473]: 2024-10-09 07:54:31.225 [INFO][4743] k8s.go 621: Teardown processing complete. ContainerID="fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c" Oct 9 07:54:31.227784 containerd[1473]: time="2024-10-09T07:54:31.227746466Z" level=info msg="TearDown network for sandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\" successfully" Oct 9 07:54:31.233279 containerd[1473]: time="2024-10-09T07:54:31.233128571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:54:31.233279 containerd[1473]: time="2024-10-09T07:54:31.233267194Z" level=info msg="RemovePodSandbox \"fe918a26c80c4794a63e5dcaab4851c9a902d99d032e6d6b9ebace707683836c\" returns successfully" Oct 9 07:54:31.234165 containerd[1473]: time="2024-10-09T07:54:31.234114187Z" level=info msg="StopPodSandbox for \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\"" Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.285 [WARNING][4767] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"728b2665-75cd-48cb-bec2-37026553584c", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319", Pod:"coredns-76f75df574-2p7nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0db7bb620ca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.285 [INFO][4767] k8s.go 608: Cleaning up netns ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.286 [INFO][4767] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" iface="eth0" netns="" Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.286 [INFO][4767] k8s.go 615: Releasing IP address(es) ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.286 [INFO][4767] utils.go 188: Calico CNI releasing IP address ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.322 [INFO][4773] ipam_plugin.go 417: Releasing address using handleID ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.322 [INFO][4773] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.322 [INFO][4773] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.330 [WARNING][4773] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.330 [INFO][4773] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0"
Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.332 [INFO][4773] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:54:31.337681 containerd[1473]: 2024-10-09 07:54:31.335 [INFO][4767] k8s.go 621: Teardown processing complete. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab"
Oct 9 07:54:31.339441 containerd[1473]: time="2024-10-09T07:54:31.338133575Z" level=info msg="TearDown network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\" successfully"
Oct 9 07:54:31.339441 containerd[1473]: time="2024-10-09T07:54:31.338168301Z" level=info msg="StopPodSandbox for \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\" returns successfully"
Oct 9 07:54:31.339441 containerd[1473]: time="2024-10-09T07:54:31.338812581Z" level=info msg="RemovePodSandbox for \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\""
Oct 9 07:54:31.339441 containerd[1473]: time="2024-10-09T07:54:31.338849443Z" level=info msg="Forcibly stopping sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\""
Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.395 [WARNING][4791] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"728b2665-75cd-48cb-bec2-37026553584c", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"ccaa7d87b06a70bc8444c605fc5ed73340265678bb60eb7da690a14ac20ff319", Pod:"coredns-76f75df574-2p7nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0db7bb620ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.395 [INFO][4791] k8s.go 608: 
Cleaning up netns ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.395 [INFO][4791] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" iface="eth0" netns="" Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.395 [INFO][4791] k8s.go 615: Releasing IP address(es) ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.395 [INFO][4791] utils.go 188: Calico CNI releasing IP address ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.424 [INFO][4797] ipam_plugin.go 417: Releasing address using handleID ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.424 [INFO][4797] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.424 [INFO][4797] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.432 [WARNING][4797] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.432 [INFO][4797] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" HandleID="k8s-pod-network.f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--2p7nw-eth0" Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.436 [INFO][4797] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:31.442671 containerd[1473]: 2024-10-09 07:54:31.439 [INFO][4791] k8s.go 621: Teardown processing complete. ContainerID="f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab" Oct 9 07:54:31.444221 containerd[1473]: time="2024-10-09T07:54:31.442705226Z" level=info msg="TearDown network for sandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\" successfully" Oct 9 07:54:31.446792 containerd[1473]: time="2024-10-09T07:54:31.446711640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:54:31.447002 containerd[1473]: time="2024-10-09T07:54:31.446839634Z" level=info msg="RemovePodSandbox \"f13efe088c1734e9d4d20e8c6fa0592faba04482fd54d84c88075cd317c253ab\" returns successfully" Oct 9 07:54:31.448282 containerd[1473]: time="2024-10-09T07:54:31.447765193Z" level=info msg="StopPodSandbox for \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\"" Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.503 [WARNING][4815] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d558ca1-372f-4592-9031-7ad3d0cc0acb", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85", Pod:"coredns-76f75df574-lsh4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fa380cd917", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.503 [INFO][4815] k8s.go 608: Cleaning up netns ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.503 [INFO][4815] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" iface="eth0" netns="" Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.503 [INFO][4815] k8s.go 615: Releasing IP address(es) ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.503 [INFO][4815] utils.go 188: Calico CNI releasing IP address ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.537 [INFO][4821] ipam_plugin.go 417: Releasing address using handleID ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.538 [INFO][4821] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.538 [INFO][4821] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.549 [WARNING][4821] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0"
Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.549 [INFO][4821] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0"
Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.552 [INFO][4821] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:54:31.557237 containerd[1473]: 2024-10-09 07:54:31.554 [INFO][4815] k8s.go 621: Teardown processing complete. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089"
Oct 9 07:54:31.558511 containerd[1473]: time="2024-10-09T07:54:31.558106885Z" level=info msg="TearDown network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\" successfully"
Oct 9 07:54:31.558511 containerd[1473]: time="2024-10-09T07:54:31.558189289Z" level=info msg="StopPodSandbox for \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\" returns successfully"
Oct 9 07:54:31.560575 containerd[1473]: time="2024-10-09T07:54:31.559700023Z" level=info msg="RemovePodSandbox for \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\""
Oct 9 07:54:31.560575 containerd[1473]: time="2024-10-09T07:54:31.559765342Z" level=info msg="Forcibly stopping sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\""
Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.613 [WARNING][4839] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4d558ca1-372f-4592-9031-7ad3d0cc0acb", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"4cd76a47e5de381e46e7ecfefbd183d4150991fc4913858ef68d8eb9fcaeda85", Pod:"coredns-76f75df574-lsh4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fa380cd917", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.613 [INFO][4839] k8s.go 608: 
Cleaning up netns ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.613 [INFO][4839] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" iface="eth0" netns="" Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.613 [INFO][4839] k8s.go 615: Releasing IP address(es) ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.613 [INFO][4839] utils.go 188: Calico CNI releasing IP address ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.641 [INFO][4845] ipam_plugin.go 417: Releasing address using handleID ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.641 [INFO][4845] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.641 [INFO][4845] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.648 [WARNING][4845] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.648 [INFO][4845] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" HandleID="k8s-pod-network.5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Workload="ci--4081.1.0--5--a4f881141a-k8s-coredns--76f75df574--lsh4c-eth0" Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.652 [INFO][4845] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:31.658017 containerd[1473]: 2024-10-09 07:54:31.655 [INFO][4839] k8s.go 621: Teardown processing complete. ContainerID="5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089" Oct 9 07:54:31.658685 containerd[1473]: time="2024-10-09T07:54:31.658223284Z" level=info msg="TearDown network for sandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\" successfully" Oct 9 07:54:31.662224 containerd[1473]: time="2024-10-09T07:54:31.662173465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:54:31.662414 containerd[1473]: time="2024-10-09T07:54:31.662289564Z" level=info msg="RemovePodSandbox \"5df1028d435143b90be9133b0391c10e4f49bf5fc6441197bbe7cbb2c129b089\" returns successfully" Oct 9 07:54:35.270533 systemd[1]: Started sshd@13-143.198.229.119:22-139.178.89.65:44910.service - OpenSSH per-connection server daemon (139.178.89.65:44910). 
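The repeated teardown records above follow one fixed sequence: acquire the host-wide IPAM lock, try to release the address by handle ID, log "Asked to release address but it doesn't exist. Ignoring" when the handle is already gone, fall back to releasing by workload ID, then release the lock. A minimal sketch of that idempotent release-with-fallback pattern (the in-memory store, class, and IDs here are hypothetical illustrations, not Calico's actual API):

```python
import threading

# Hypothetical in-memory IPAM store; Calico keeps this state in its
# datastore, guarded by a host-wide lock (ipam_plugin.go 358/373/379).
class Ipam:
    def __init__(self):
        self.lock = threading.Lock()   # stands in for the host-wide IPAM lock
        self.by_handle = {}            # handle ID -> IP
        self.by_workload = {}          # workload ID -> IP

    def release(self, handle_id, workload_id):
        """Release an address; 'not found' is ignored so teardown is idempotent."""
        with self.lock:                            # "Acquired host-wide IPAM lock."
            ip = self.by_handle.pop(handle_id, None)
            # ip is None -> "Asked to release address but it doesn't exist. Ignoring"
            # Second pass: "Releasing address using workloadID ..."
            ip2 = self.by_workload.pop(workload_id, None)
            return ip or ip2                       # lock released on exit

ipam = Ipam()
ipam.by_workload["coredns--2p7nw-eth0"] = "192.168.52.132"
# Handle entry already gone (sandbox torn down earlier); workload entry remains:
print(ipam.release("k8s-pod-network.f13efe", "coredns--2p7nw-eth0"))  # prints 192.168.52.132
```

Because both lookups tolerate missing entries, running the same teardown twice (as the "Forcibly stopping sandbox" pass does here) succeeds both times.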
Oct 9 07:54:35.343523 sshd[4867]: Accepted publickey for core from 139.178.89.65 port 44910 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:35.345699 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:35.351736 systemd-logind[1448]: New session 14 of user core. Oct 9 07:54:35.355290 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:54:35.537773 sshd[4867]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:35.541586 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:54:35.542426 systemd[1]: sshd@13-143.198.229.119:22-139.178.89.65:44910.service: Deactivated successfully. Oct 9 07:54:35.545748 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:54:35.549097 systemd-logind[1448]: Removed session 14. Oct 9 07:54:37.088521 systemd[1]: run-containerd-runc-k8s.io-08af2b261c102a660d4e393da811cc815dc19d859946ade8b50e5ec353d6872c-runc.tnqqno.mount: Deactivated successfully. Oct 9 07:54:39.643021 kubelet[2549]: I1009 07:54:39.642562 2549 topology_manager.go:215] "Topology Admit Handler" podUID="013c5530-04ea-4f40-9191-833b5fdd3c0b" podNamespace="calico-apiserver" podName="calico-apiserver-697bffd85-p4s7h" Oct 9 07:54:39.668142 systemd[1]: Created slice kubepods-besteffort-pod013c5530_04ea_4f40_9191_833b5fdd3c0b.slice - libcontainer container kubepods-besteffort-pod013c5530_04ea_4f40_9191_833b5fdd3c0b.slice. 
Oct 9 07:54:39.739548 kubelet[2549]: I1009 07:54:39.738697 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/013c5530-04ea-4f40-9191-833b5fdd3c0b-calico-apiserver-certs\") pod \"calico-apiserver-697bffd85-p4s7h\" (UID: \"013c5530-04ea-4f40-9191-833b5fdd3c0b\") " pod="calico-apiserver/calico-apiserver-697bffd85-p4s7h" Oct 9 07:54:39.744102 kubelet[2549]: I1009 07:54:39.741506 2549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnst\" (UniqueName: \"kubernetes.io/projected/013c5530-04ea-4f40-9191-833b5fdd3c0b-kube-api-access-vfnst\") pod \"calico-apiserver-697bffd85-p4s7h\" (UID: \"013c5530-04ea-4f40-9191-833b5fdd3c0b\") " pod="calico-apiserver/calico-apiserver-697bffd85-p4s7h" Oct 9 07:54:39.843395 kubelet[2549]: E1009 07:54:39.842920 2549 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:54:39.850737 kubelet[2549]: E1009 07:54:39.850685 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/013c5530-04ea-4f40-9191-833b5fdd3c0b-calico-apiserver-certs podName:013c5530-04ea-4f40-9191-833b5fdd3c0b nodeName:}" failed. No retries permitted until 2024-10-09 07:54:40.342988768 +0000 UTC m=+69.854871969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/013c5530-04ea-4f40-9191-833b5fdd3c0b-calico-apiserver-certs") pod "calico-apiserver-697bffd85-p4s7h" (UID: "013c5530-04ea-4f40-9191-833b5fdd3c0b") : secret "calico-apiserver-certs" not found Oct 9 07:54:40.559538 systemd[1]: Started sshd@14-143.198.229.119:22-139.178.89.65:44912.service - OpenSSH per-connection server daemon (139.178.89.65:44912). 
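The failed mount above (secret "calico-apiserver-certs" not yet created) is retried on a growing delay: the log shows "No retries permitted until ... (durationBeforeRetry 500ms)", and kubelet doubles that delay on each subsequent failure. A small sketch of such a doubling backoff schedule; the 2-minute cap and attempt count are assumptions for illustration, not values taken from this log:

```python
# Doubling retry delay as seen in the kubelet record above
# ("durationBeforeRetry 500ms"): each failed MountVolume.SetUp doubles
# the wait before the next attempt, up to an assumed cap.
def backoff_delays(initial=0.5, factor=2.0, cap=120.0, attempts=10):
    delay, out = initial, []
    for _ in range(attempts):
        out.append(min(delay, cap))  # never wait longer than the cap
        delay *= factor              # double for the next failure
    return out

print(backoff_delays(attempts=6))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

Once the secret exists, the next retry succeeds, which is why the RunPodSandbox for calico-apiserver-697bffd85-p4s7h appears shortly afterwards in the log.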
Oct 9 07:54:40.573454 containerd[1473]: time="2024-10-09T07:54:40.572982654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697bffd85-p4s7h,Uid:013c5530-04ea-4f40-9191-833b5fdd3c0b,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:54:40.618369 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 44912 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:40.632861 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:40.647033 systemd-logind[1448]: New session 15 of user core. Oct 9 07:54:40.652435 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:54:40.873023 systemd-networkd[1366]: calib062506c0ba: Link UP Oct 9 07:54:40.876272 systemd-networkd[1366]: calib062506c0ba: Gained carrier Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.689 [INFO][4910] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0 calico-apiserver-697bffd85- calico-apiserver 013c5530-04ea-4f40-9191-833b5fdd3c0b 1048 0 2024-10-09 07:54:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:697bffd85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.1.0-5-a4f881141a calico-apiserver-697bffd85-p4s7h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib062506c0ba [] []}} ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.690 [INFO][4910] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.753 [INFO][4922] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" HandleID="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.778 [INFO][4922] ipam_plugin.go 270: Auto assigning IP ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" HandleID="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efd40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.1.0-5-a4f881141a", "pod":"calico-apiserver-697bffd85-p4s7h", "timestamp":"2024-10-09 07:54:40.75385926 +0000 UTC"}, Hostname:"ci-4081.1.0-5-a4f881141a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.779 [INFO][4922] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.779 [INFO][4922] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.779 [INFO][4922] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-5-a4f881141a' Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.782 [INFO][4922] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.797 [INFO][4922] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.809 [INFO][4922] ipam.go 489: Trying affinity for 192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.815 [INFO][4922] ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.819 [INFO][4922] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.820 [INFO][4922] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.826 [INFO][4922] ipam.go 1685: Creating new handle: k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.843 [INFO][4922] ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" host="ci-4081.1.0-5-a4f881141a" Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.853 [INFO][4922] ipam.go 1216: Successfully claimed IPs: [192.168.52.133/26] 
block=192.168.52.128/26 handle="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.854 [INFO][4922] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.133/26] handle="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" host="ci-4081.1.0-5-a4f881141a"
Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.854 [INFO][4922] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:54:40.909122 containerd[1473]: 2024-10-09 07:54:40.854 [INFO][4922] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.52.133/26] IPv6=[] ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" HandleID="k8s-pod-network.4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Workload="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0"
Oct 9 07:54:40.909859 containerd[1473]: 2024-10-09 07:54:40.861 [INFO][4910] k8s.go 386: Populated endpoint ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0", GenerateName:"calico-apiserver-697bffd85-", Namespace:"calico-apiserver", SelfLink:"", UID:"013c5530-04ea-4f40-9191-833b5fdd3c0b", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 54, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"697bffd85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"", Pod:"calico-apiserver-697bffd85-p4s7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib062506c0ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:54:40.909859 containerd[1473]: 2024-10-09 07:54:40.861 [INFO][4910] k8s.go 387: Calico CNI using IPs: [192.168.52.133/32] ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0"
Oct 9 07:54:40.909859 containerd[1473]: 2024-10-09 07:54:40.863 [INFO][4910] dataplane_linux.go 68: Setting the host side veth name to calib062506c0ba ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0"
Oct 9 07:54:40.909859 containerd[1473]: 2024-10-09 07:54:40.872 [INFO][4910] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0"
Oct 9 07:54:40.909859 containerd[1473]: 2024-10-09 07:54:40.873 [INFO][4910] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0", GenerateName:"calico-apiserver-697bffd85-", Namespace:"calico-apiserver", SelfLink:"", UID:"013c5530-04ea-4f40-9191-833b5fdd3c0b", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 54, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"697bffd85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-5-a4f881141a", ContainerID:"4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee", Pod:"calico-apiserver-697bffd85-p4s7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib062506c0ba", MAC:"f6:39:fb:c7:60:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:54:40.909859 containerd[1473]: 2024-10-09 07:54:40.902 [INFO][4910] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee" Namespace="calico-apiserver" Pod="calico-apiserver-697bffd85-p4s7h" WorkloadEndpoint="ci--4081.1.0--5--a4f881141a-k8s-calico--apiserver--697bffd85--p4s7h-eth0"
Oct 9 07:54:40.972398 containerd[1473]: time="2024-10-09T07:54:40.970497714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:54:40.972398 containerd[1473]: time="2024-10-09T07:54:40.970806168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:54:40.972398 containerd[1473]: time="2024-10-09T07:54:40.970840882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:54:40.972398 containerd[1473]: time="2024-10-09T07:54:40.972317644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:54:41.025563 systemd[1]: Started cri-containerd-4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee.scope - libcontainer container 4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee.
Oct 9 07:54:41.137213 sshd[4907]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:41.156170 systemd[1]: sshd@14-143.198.229.119:22-139.178.89.65:44912.service: Deactivated successfully.
Oct 9 07:54:41.156989 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Oct 9 07:54:41.162806 systemd[1]: session-15.scope: Deactivated successfully.
Oct 9 07:54:41.164415 systemd-logind[1448]: Removed session 15.
Oct 9 07:54:41.200308 containerd[1473]: time="2024-10-09T07:54:41.200263193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697bffd85-p4s7h,Uid:013c5530-04ea-4f40-9191-833b5fdd3c0b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee\""
Oct 9 07:54:41.204923 containerd[1473]: time="2024-10-09T07:54:41.203758908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 07:54:42.816147 systemd-networkd[1366]: calib062506c0ba: Gained IPv6LL
Oct 9 07:54:43.245713 containerd[1473]: time="2024-10-09T07:54:43.244790731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:43.245713 containerd[1473]: time="2024-10-09T07:54:43.245629091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 07:54:43.246388 containerd[1473]: time="2024-10-09T07:54:43.246350661Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:43.249193 containerd[1473]: time="2024-10-09T07:54:43.249141108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:54:43.250051 containerd[1473]: time="2024-10-09T07:54:43.250008661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.046146899s"
Oct 9 07:54:43.250051 containerd[1473]: time="2024-10-09T07:54:43.250050844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 07:54:43.253053 containerd[1473]: time="2024-10-09T07:54:43.253016097Z" level=info msg="CreateContainer within sandbox \"4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 07:54:43.270113 containerd[1473]: time="2024-10-09T07:54:43.270048867Z" level=info msg="CreateContainer within sandbox \"4e37e0731eb20c2dde315c987d9f6f1d9756fe33969f8bf791fe3a75347502ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2d37c6f936e8a64676ba2c28367d038c0c4e08aeafc6332c1d8cd923d940661\""
Oct 9 07:54:43.271528 containerd[1473]: time="2024-10-09T07:54:43.271471119Z" level=info msg="StartContainer for \"f2d37c6f936e8a64676ba2c28367d038c0c4e08aeafc6332c1d8cd923d940661\""
Oct 9 07:54:43.332954 systemd[1]: run-containerd-runc-k8s.io-f2d37c6f936e8a64676ba2c28367d038c0c4e08aeafc6332c1d8cd923d940661-runc.AVZZTN.mount: Deactivated successfully.
Oct 9 07:54:43.347585 systemd[1]: Started cri-containerd-f2d37c6f936e8a64676ba2c28367d038c0c4e08aeafc6332c1d8cd923d940661.scope - libcontainer container f2d37c6f936e8a64676ba2c28367d038c0c4e08aeafc6332c1d8cd923d940661.
Oct 9 07:54:43.405355 containerd[1473]: time="2024-10-09T07:54:43.405213998Z" level=info msg="StartContainer for \"f2d37c6f936e8a64676ba2c28367d038c0c4e08aeafc6332c1d8cd923d940661\" returns successfully"
Oct 9 07:54:44.199217 kubelet[2549]: I1009 07:54:44.198893 2549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-697bffd85-p4s7h" podStartSLOduration=3.151423515 podStartE2EDuration="5.198846789s" podCreationTimestamp="2024-10-09 07:54:39 +0000 UTC" firstStartedPulling="2024-10-09 07:54:41.202886686 +0000 UTC m=+70.714769901" lastFinishedPulling="2024-10-09 07:54:43.25030997 +0000 UTC m=+72.762193175" observedRunningTime="2024-10-09 07:54:44.19874058 +0000 UTC m=+73.710623803" watchObservedRunningTime="2024-10-09 07:54:44.198846789 +0000 UTC m=+73.710730012"
Oct 9 07:54:44.655309 kubelet[2549]: E1009 07:54:44.654814 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:45.656812 kubelet[2549]: E1009 07:54:45.655473 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:46.154865 systemd[1]: Started sshd@15-143.198.229.119:22-139.178.89.65:34670.service - OpenSSH per-connection server daemon (139.178.89.65:34670).
Oct 9 07:54:46.255571 sshd[5051]: Accepted publickey for core from 139.178.89.65 port 34670 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:54:46.259343 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:54:46.271394 systemd-logind[1448]: New session 16 of user core.
Oct 9 07:54:46.277497 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 9 07:54:46.785329 sshd[5051]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:46.790192 systemd[1]: sshd@15-143.198.229.119:22-139.178.89.65:34670.service: Deactivated successfully.
Oct 9 07:54:46.794677 systemd[1]: session-16.scope: Deactivated successfully.
Oct 9 07:54:46.796479 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Oct 9 07:54:46.797649 systemd-logind[1448]: Removed session 16.
Oct 9 07:54:51.655708 kubelet[2549]: E1009 07:54:51.655277 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:51.808584 systemd[1]: Started sshd@16-143.198.229.119:22-139.178.89.65:34678.service - OpenSSH per-connection server daemon (139.178.89.65:34678).
Oct 9 07:54:51.889229 sshd[5070]: Accepted publickey for core from 139.178.89.65 port 34678 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:54:51.892105 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:54:51.904303 systemd-logind[1448]: New session 17 of user core.
Oct 9 07:54:51.911133 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 9 07:54:52.185606 sshd[5070]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:52.207732 systemd[1]: Started sshd@17-143.198.229.119:22-139.178.89.65:34686.service - OpenSSH per-connection server daemon (139.178.89.65:34686).
Oct 9 07:54:52.209230 systemd[1]: sshd@16-143.198.229.119:22-139.178.89.65:34678.service: Deactivated successfully.
Oct 9 07:54:52.214779 systemd[1]: session-17.scope: Deactivated successfully.
Oct 9 07:54:52.219859 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Oct 9 07:54:52.225607 systemd-logind[1448]: Removed session 17.
Oct 9 07:54:52.301925 sshd[5107]: Accepted publickey for core from 139.178.89.65 port 34686 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:54:52.303327 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:54:52.316214 systemd-logind[1448]: New session 18 of user core.
Oct 9 07:54:52.320305 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 9 07:54:52.663186 sshd[5107]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:52.679050 systemd[1]: sshd@17-143.198.229.119:22-139.178.89.65:34686.service: Deactivated successfully.
Oct 9 07:54:52.683916 systemd[1]: session-18.scope: Deactivated successfully.
Oct 9 07:54:52.685533 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Oct 9 07:54:52.695408 systemd[1]: Started sshd@18-143.198.229.119:22-139.178.89.65:34688.service - OpenSSH per-connection server daemon (139.178.89.65:34688).
Oct 9 07:54:52.701183 systemd-logind[1448]: Removed session 18.
Oct 9 07:54:52.772406 sshd[5120]: Accepted publickey for core from 139.178.89.65 port 34688 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:54:52.775921 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:54:52.783650 systemd-logind[1448]: New session 19 of user core.
Oct 9 07:54:52.790386 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 9 07:54:54.657885 kubelet[2549]: E1009 07:54:54.655796 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:54:55.248383 sshd[5120]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:55.263825 systemd[1]: sshd@18-143.198.229.119:22-139.178.89.65:34688.service: Deactivated successfully.
Oct 9 07:54:55.268538 systemd[1]: session-19.scope: Deactivated successfully.
Oct 9 07:54:55.270305 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Oct 9 07:54:55.281550 systemd[1]: Started sshd@19-143.198.229.119:22-139.178.89.65:56266.service - OpenSSH per-connection server daemon (139.178.89.65:56266).
Oct 9 07:54:55.288217 systemd-logind[1448]: Removed session 19.
Oct 9 07:54:55.368407 sshd[5142]: Accepted publickey for core from 139.178.89.65 port 56266 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:54:55.368140 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:54:55.374045 systemd-logind[1448]: New session 20 of user core.
Oct 9 07:54:55.378364 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 07:54:56.159338 sshd[5142]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:56.171803 systemd[1]: sshd@19-143.198.229.119:22-139.178.89.65:56266.service: Deactivated successfully.
Oct 9 07:54:56.177983 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 07:54:56.181688 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Oct 9 07:54:56.188529 systemd[1]: Started sshd@20-143.198.229.119:22-139.178.89.65:56280.service - OpenSSH per-connection server daemon (139.178.89.65:56280).
Oct 9 07:54:56.191590 systemd-logind[1448]: Removed session 20.
Oct 9 07:54:56.251371 sshd[5154]: Accepted publickey for core from 139.178.89.65 port 56280 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:54:56.253950 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:54:56.261932 systemd-logind[1448]: New session 21 of user core.
Oct 9 07:54:56.269427 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 07:54:56.416814 sshd[5154]: pam_unix(sshd:session): session closed for user core
Oct 9 07:54:56.422333 systemd[1]: sshd@20-143.198.229.119:22-139.178.89.65:56280.service: Deactivated successfully.
Oct 9 07:54:56.425777 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 07:54:56.426736 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Oct 9 07:54:56.428246 systemd-logind[1448]: Removed session 21.
Oct 9 07:55:01.439740 systemd[1]: Started sshd@21-143.198.229.119:22-139.178.89.65:56296.service - OpenSSH per-connection server daemon (139.178.89.65:56296).
Oct 9 07:55:01.556237 sshd[5169]: Accepted publickey for core from 139.178.89.65 port 56296 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:55:01.558745 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:55:01.567345 systemd-logind[1448]: New session 22 of user core.
Oct 9 07:55:01.574464 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 07:55:01.778569 sshd[5169]: pam_unix(sshd:session): session closed for user core
Oct 9 07:55:01.787802 systemd[1]: sshd@21-143.198.229.119:22-139.178.89.65:56296.service: Deactivated successfully.
Oct 9 07:55:01.791017 systemd[1]: session-22.scope: Deactivated successfully.
Oct 9 07:55:01.793119 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Oct 9 07:55:01.795381 systemd-logind[1448]: Removed session 22.
Oct 9 07:55:06.804556 systemd[1]: Started sshd@22-143.198.229.119:22-139.178.89.65:58930.service - OpenSSH per-connection server daemon (139.178.89.65:58930).
Oct 9 07:55:06.873367 sshd[5190]: Accepted publickey for core from 139.178.89.65 port 58930 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:55:06.876289 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:55:06.883658 systemd-logind[1448]: New session 23 of user core.
Oct 9 07:55:06.892838 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 07:55:07.047801 sshd[5190]: pam_unix(sshd:session): session closed for user core
Oct 9 07:55:07.053282 systemd[1]: sshd@22-143.198.229.119:22-139.178.89.65:58930.service: Deactivated successfully.
Oct 9 07:55:07.056440 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 07:55:07.059797 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Oct 9 07:55:07.061269 systemd-logind[1448]: Removed session 23.
Oct 9 07:55:12.076231 systemd[1]: Started sshd@23-143.198.229.119:22-139.178.89.65:58940.service - OpenSSH per-connection server daemon (139.178.89.65:58940).
Oct 9 07:55:12.149486 sshd[5223]: Accepted publickey for core from 139.178.89.65 port 58940 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:55:12.152532 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:55:12.163269 systemd-logind[1448]: New session 24 of user core.
Oct 9 07:55:12.170365 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 07:55:12.338399 sshd[5223]: pam_unix(sshd:session): session closed for user core
Oct 9 07:55:12.345528 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Oct 9 07:55:12.346785 systemd[1]: sshd@23-143.198.229.119:22-139.178.89.65:58940.service: Deactivated successfully.
Oct 9 07:55:12.351106 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 07:55:12.353555 systemd-logind[1448]: Removed session 24.
Oct 9 07:55:17.359501 systemd[1]: Started sshd@24-143.198.229.119:22-139.178.89.65:40020.service - OpenSSH per-connection server daemon (139.178.89.65:40020).
Oct 9 07:55:17.423838 sshd[5263]: Accepted publickey for core from 139.178.89.65 port 40020 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:55:17.425730 sshd[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:55:17.431373 systemd-logind[1448]: New session 25 of user core.
Oct 9 07:55:17.440383 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 07:55:17.596527 sshd[5263]: pam_unix(sshd:session): session closed for user core
Oct 9 07:55:17.600716 systemd[1]: sshd@24-143.198.229.119:22-139.178.89.65:40020.service: Deactivated successfully.
Oct 9 07:55:17.603523 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 07:55:17.604887 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Oct 9 07:55:17.606587 systemd-logind[1448]: Removed session 25.
Oct 9 07:55:18.655890 kubelet[2549]: E1009 07:55:18.655316 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Oct 9 07:55:19.655854 kubelet[2549]: E1009 07:55:19.655218 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"