May 8 00:05:52.068868 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 00:05:52.068908 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:05:52.068931 kernel: BIOS-provided physical RAM map:
May 8 00:05:52.068946 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 8 00:05:52.068960 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 8 00:05:52.068975 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 00:05:52.068994 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 8 00:05:52.069010 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 8 00:05:52.069026 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:05:52.069042 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 00:05:52.069061 kernel: NX (Execute Disable) protection: active
May 8 00:05:52.069077 kernel: APIC: Static calls initialized
May 8 00:05:52.069096 kernel: SMBIOS 2.8 present.
May 8 00:05:52.069113 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 8 00:05:52.069189 kernel: Hypervisor detected: KVM
May 8 00:05:52.069210 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:05:52.069241 kernel: kvm-clock: using sched offset of 3737104777 cycles
May 8 00:05:52.069263 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:05:52.069302 kernel: tsc: Detected 2294.608 MHz processor
May 8 00:05:52.069324 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:05:52.069346 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:05:52.069369 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 8 00:05:52.069394 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 8 00:05:52.069416 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:05:52.069444 kernel: ACPI: Early table checksum verification disabled
May 8 00:05:52.069466 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 8 00:05:52.069488 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:05:52.069510 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:05:52.069532 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:05:52.069553 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 8 00:05:52.069571 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:05:52.069589 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:05:52.069607 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:05:52.069628 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:05:52.069646 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 8 00:05:52.069664 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 8 00:05:52.069682 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 8 00:05:52.069700 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 8 00:05:52.069718 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 8 00:05:52.069736 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 8 00:05:52.069761 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 8 00:05:52.069783 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 8 00:05:52.070248 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 8 00:05:52.072332 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 8 00:05:52.072357 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 8 00:05:52.072384 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
May 8 00:05:52.072404 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
May 8 00:05:52.072430 kernel: Zone ranges:
May 8 00:05:52.072451 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:05:52.072469 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 8 00:05:52.072485 kernel: Normal empty
May 8 00:05:52.072499 kernel: Movable zone start for each node
May 8 00:05:52.072514 kernel: Early memory node ranges
May 8 00:05:52.072529 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 00:05:52.072543 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 8 00:05:52.072557 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 8 00:05:52.072578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:05:52.072594 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 00:05:52.072614 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 8 00:05:52.072628 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:05:52.072641 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:05:52.072655 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:05:52.072670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:05:52.072685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:05:52.072700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:05:52.072715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:05:52.072734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:05:52.072749 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:05:52.072763 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:05:52.072779 kernel: TSC deadline timer available
May 8 00:05:52.072794 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 8 00:05:52.072809 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:05:52.072824 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 8 00:05:52.072844 kernel: Booting paravirtualized kernel on KVM
May 8 00:05:52.072859 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:05:52.072878 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 8 00:05:52.072892 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
May 8 00:05:52.072907 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
May 8 00:05:52.072922 kernel: pcpu-alloc: [0] 0 1
May 8 00:05:52.072936 kernel: kvm-guest: PV spinlocks disabled, no host support
May 8 00:05:52.072954 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:05:52.072970 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:05:52.072984 kernel: random: crng init done
May 8 00:05:52.073003 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:05:52.073018 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 8 00:05:52.073033 kernel: Fallback order for Node 0: 0
May 8 00:05:52.073047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
May 8 00:05:52.073063 kernel: Policy zone: DMA32
May 8 00:05:52.073079 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:05:52.073095 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 127196K reserved, 0K cma-reserved)
May 8 00:05:52.073110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 00:05:52.073126 kernel: Kernel/User page tables isolation: enabled
May 8 00:05:52.073165 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 00:05:52.073183 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:05:52.073198 kernel: Dynamic Preempt: voluntary
May 8 00:05:52.073212 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:05:52.073229 kernel: rcu: RCU event tracing is enabled.
May 8 00:05:52.073244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 00:05:52.073260 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:05:52.073329 kernel: Rude variant of Tasks RCU enabled.
May 8 00:05:52.073344 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:05:52.073364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:05:52.073380 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 00:05:52.073395 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 8 00:05:52.073411 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:05:52.073431 kernel: Console: colour VGA+ 80x25
May 8 00:05:52.073446 kernel: printk: console [tty0] enabled
May 8 00:05:52.073461 kernel: printk: console [ttyS0] enabled
May 8 00:05:52.073477 kernel: ACPI: Core revision 20230628
May 8 00:05:52.073493 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:05:52.073513 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:05:52.073529 kernel: x2apic enabled
May 8 00:05:52.073544 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:05:52.073560 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:05:52.073576 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
May 8 00:05:52.073591 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
May 8 00:05:52.073607 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 8 00:05:52.073623 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 8 00:05:52.073655 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:05:52.073671 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:05:52.073688 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:05:52.073709 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:05:52.073726 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 8 00:05:52.073743 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:05:52.073761 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:05:52.073777 kernel: MDS: Mitigation: Clear CPU buffers
May 8 00:05:52.073794 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 8 00:05:52.073819 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:05:52.073836 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:05:52.073853 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:05:52.073869 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:05:52.073886 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 8 00:05:52.073903 kernel: Freeing SMP alternatives memory: 32K
May 8 00:05:52.073919 kernel: pid_max: default: 32768 minimum: 301
May 8 00:05:52.073936 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:05:52.073957 kernel: landlock: Up and running.
May 8 00:05:52.073974 kernel: SELinux: Initializing.
May 8 00:05:52.073990 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:05:52.074008 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:05:52.074024 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 8 00:05:52.074041 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:05:52.074058 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:05:52.074074 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:05:52.074091 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 8 00:05:52.074112 kernel: signal: max sigframe size: 1776
May 8 00:05:52.074128 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:05:52.074146 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:05:52.074163 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 8 00:05:52.074181 kernel: smp: Bringing up secondary CPUs ...
May 8 00:05:52.074198 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:05:52.074215 kernel: .... node #0, CPUs: #1
May 8 00:05:52.074231 kernel: smp: Brought up 1 node, 2 CPUs
May 8 00:05:52.074251 kernel: smpboot: Max logical packages: 1
May 8 00:05:52.074272 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
May 8 00:05:52.076331 kernel: devtmpfs: initialized
May 8 00:05:52.076359 kernel: x86/mm: Memory block size: 128MB
May 8 00:05:52.076381 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:05:52.076403 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 8 00:05:52.076426 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:05:52.076448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:05:52.076467 kernel: audit: initializing netlink subsys (disabled)
May 8 00:05:52.076492 kernel: audit: type=2000 audit(1746662750.596:1): state=initialized audit_enabled=0 res=1
May 8 00:05:52.076525 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:05:52.076546 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:05:52.076567 kernel: cpuidle: using governor menu
May 8 00:05:52.076588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:05:52.076608 kernel: dca service started, version 1.12.1
May 8 00:05:52.076629 kernel: PCI: Using configuration type 1 for base access
May 8 00:05:52.076650 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:05:52.076670 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:05:52.076691 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:05:52.076715 kernel: ACPI: Added _OSI(Module Device)
May 8 00:05:52.076736 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:05:52.076756 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:05:52.076777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:05:52.076797 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:05:52.076818 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:05:52.076838 kernel: ACPI: Interpreter enabled
May 8 00:05:52.076859 kernel: ACPI: PM: (supports S0 S5)
May 8 00:05:52.076879 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:05:52.076904 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:05:52.076924 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:05:52.076945 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 8 00:05:52.076966 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:05:52.077236 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:05:52.077433 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 8 00:05:52.077551 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 8 00:05:52.077569 kernel: acpiphp: Slot [3] registered
May 8 00:05:52.077585 kernel: acpiphp: Slot [4] registered
May 8 00:05:52.077599 kernel: acpiphp: Slot [5] registered
May 8 00:05:52.077608 kernel: acpiphp: Slot [6] registered
May 8 00:05:52.077617 kernel: acpiphp: Slot [7] registered
May 8 00:05:52.077627 kernel: acpiphp: Slot [8] registered
May 8 00:05:52.077636 kernel: acpiphp: Slot [9] registered
May 8 00:05:52.077646 kernel: acpiphp: Slot [10] registered
May 8 00:05:52.077655 kernel: acpiphp: Slot [11] registered
May 8 00:05:52.077668 kernel: acpiphp: Slot [12] registered
May 8 00:05:52.077677 kernel: acpiphp: Slot [13] registered
May 8 00:05:52.077686 kernel: acpiphp: Slot [14] registered
May 8 00:05:52.077696 kernel: acpiphp: Slot [15] registered
May 8 00:05:52.077705 kernel: acpiphp: Slot [16] registered
May 8 00:05:52.077714 kernel: acpiphp: Slot [17] registered
May 8 00:05:52.077723 kernel: acpiphp: Slot [18] registered
May 8 00:05:52.077732 kernel: acpiphp: Slot [19] registered
May 8 00:05:52.077742 kernel: acpiphp: Slot [20] registered
May 8 00:05:52.077751 kernel: acpiphp: Slot [21] registered
May 8 00:05:52.077763 kernel: acpiphp: Slot [22] registered
May 8 00:05:52.077772 kernel: acpiphp: Slot [23] registered
May 8 00:05:52.077782 kernel: acpiphp: Slot [24] registered
May 8 00:05:52.077791 kernel: acpiphp: Slot [25] registered
May 8 00:05:52.077800 kernel: acpiphp: Slot [26] registered
May 8 00:05:52.077810 kernel: acpiphp: Slot [27] registered
May 8 00:05:52.077819 kernel: acpiphp: Slot [28] registered
May 8 00:05:52.077828 kernel: acpiphp: Slot [29] registered
May 8 00:05:52.077837 kernel: acpiphp: Slot [30] registered
May 8 00:05:52.077849 kernel: acpiphp: Slot [31] registered
May 8 00:05:52.077859 kernel: PCI host bridge to bus 0000:00
May 8 00:05:52.077984 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:05:52.078083 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:05:52.078178 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:05:52.078269 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 8 00:05:52.078623 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 8 00:05:52.078721 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:05:52.078881 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 8 00:05:52.078997 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 8 00:05:52.079116 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 8 00:05:52.079219 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
May 8 00:05:52.079335 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 8 00:05:52.080016 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 8 00:05:52.080194 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 8 00:05:52.082398 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 8 00:05:52.082577 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
May 8 00:05:52.082701 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
May 8 00:05:52.082881 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 8 00:05:52.083034 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 8 00:05:52.083192 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 8 00:05:52.083373 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 8 00:05:52.083527 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 8 00:05:52.083677 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
May 8 00:05:52.083833 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
May 8 00:05:52.083990 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
May 8 00:05:52.084139 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:05:52.085446 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 8 00:05:52.085589 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
May 8 00:05:52.085695 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
May 8 00:05:52.085797 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
May 8 00:05:52.085903 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:05:52.086003 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
May 8 00:05:52.086104 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
May 8 00:05:52.086212 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
May 8 00:05:52.087365 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
May 8 00:05:52.087482 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
May 8 00:05:52.087584 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
May 8 00:05:52.087708 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 8 00:05:52.087850 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
May 8 00:05:52.087994 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
May 8 00:05:52.088104 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
May 8 00:05:52.088204 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
May 8 00:05:52.089417 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
May 8 00:05:52.089599 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
May 8 00:05:52.089822 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
May 8 00:05:52.090024 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
May 8 00:05:52.090245 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
May 8 00:05:52.090470 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
May 8 00:05:52.090577 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
May 8 00:05:52.090590 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:05:52.090600 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:05:52.090610 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:05:52.090620 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:05:52.090630 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 8 00:05:52.090644 kernel: iommu: Default domain type: Translated
May 8 00:05:52.090653 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:05:52.090663 kernel: PCI: Using ACPI for IRQ routing
May 8 00:05:52.090673 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:05:52.090682 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 8 00:05:52.090692 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 8 00:05:52.090796 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 8 00:05:52.090897 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 8 00:05:52.090999 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:05:52.091012 kernel: vgaarb: loaded
May 8 00:05:52.091021 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:05:52.091031 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:05:52.093336 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:05:52.093352 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:05:52.093363 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:05:52.093372 kernel: pnp: PnP ACPI init
May 8 00:05:52.093382 kernel: pnp: PnP ACPI: found 4 devices
May 8 00:05:52.093400 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:05:52.093409 kernel: NET: Registered PF_INET protocol family
May 8 00:05:52.093419 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:05:52.093428 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 8 00:05:52.093438 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:05:52.093448 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 8 00:05:52.093458 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 8 00:05:52.093467 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 8 00:05:52.093477 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:05:52.093489 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:05:52.093499 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:05:52.093508 kernel: NET: Registered PF_XDP protocol family
May 8 00:05:52.093654 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:05:52.093749 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:05:52.093840 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:05:52.093931 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 8 00:05:52.094019 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 8 00:05:52.094134 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 8 00:05:52.094239 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 8 00:05:52.094253 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 8 00:05:52.094409 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 41933 usecs
May 8 00:05:52.094424 kernel: PCI: CLS 0 bytes, default 64
May 8 00:05:52.094434 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 8 00:05:52.094444 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
May 8 00:05:52.094454 kernel: Initialise system trusted keyrings
May 8 00:05:52.094469 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 8 00:05:52.094478 kernel: Key type asymmetric registered
May 8 00:05:52.094488 kernel: Asymmetric key parser 'x509' registered
May 8 00:05:52.094498 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:05:52.094508 kernel: io scheduler mq-deadline registered
May 8 00:05:52.094518 kernel: io scheduler kyber registered
May 8 00:05:52.094527 kernel: io scheduler bfq registered
May 8 00:05:52.094537 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:05:52.094546 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 8 00:05:52.094556 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 8 00:05:52.094569 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 8 00:05:52.094578 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:05:52.094588 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:05:52.094597 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:05:52.094606 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:05:52.094616 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:05:52.094626 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:05:52.094745 kernel: rtc_cmos 00:03: RTC can wake from S4
May 8 00:05:52.094845 kernel: rtc_cmos 00:03: registered as rtc0
May 8 00:05:52.094937 kernel: rtc_cmos 00:03: setting system clock to 2025-05-08T00:05:51 UTC (1746662751)
May 8 00:05:52.095029 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 8 00:05:52.095042 kernel: intel_pstate: CPU model not supported
May 8 00:05:52.095051 kernel: NET: Registered PF_INET6 protocol family
May 8 00:05:52.095061 kernel: Segment Routing with IPv6
May 8 00:05:52.095070 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:05:52.095080 kernel: NET: Registered PF_PACKET protocol family
May 8 00:05:52.095093 kernel: Key type dns_resolver registered
May 8 00:05:52.095103 kernel: IPI shorthand broadcast: enabled
May 8 00:05:52.095113 kernel: sched_clock: Marking stable (1148006098, 172376287)->(1512509975, -192127590)
May 8 00:05:52.095122 kernel: registered taskstats version 1
May 8 00:05:52.095131 kernel: Loading compiled-in X.509 certificates
May 8 00:05:52.095141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6'
May 8 00:05:52.095150 kernel: Key type .fscrypt registered
May 8 00:05:52.095159 kernel: Key type fscrypt-provisioning registered
May 8 00:05:52.095168 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:05:52.095181 kernel: ima: Allocated hash algorithm: sha1
May 8 00:05:52.095190 kernel: ima: No architecture policies found
May 8 00:05:52.095200 kernel: clk: Disabling unused clocks
May 8 00:05:52.095209 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 8 00:05:52.095219 kernel: Write protecting the kernel read-only data: 38912k
May 8 00:05:52.095248 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 8 00:05:52.095260 kernel: Run /init as init process
May 8 00:05:52.095270 kernel:   with arguments:
May 8 00:05:52.095280 kernel:     /init
May 8 00:05:52.096329 kernel:   with environment:
May 8 00:05:52.096339 kernel:     HOME=/
May 8 00:05:52.096352 kernel:     TERM=linux
May 8 00:05:52.096361 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:05:52.096373 systemd[1]: Successfully made /usr/ read-only.
May 8 00:05:52.096387 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:05:52.096398 systemd[1]: Detected virtualization kvm.
May 8 00:05:52.096408 systemd[1]: Detected architecture x86-64.
May 8 00:05:52.096421 systemd[1]: Running in initrd.
May 8 00:05:52.096431 systemd[1]: No hostname configured, using default hostname.
May 8 00:05:52.096441 systemd[1]: Hostname set to <localhost>.
May 8 00:05:52.096451 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:05:52.096462 systemd[1]: Queued start job for default target initrd.target.
May 8 00:05:52.096472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:05:52.096483 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:05:52.096494 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:05:52.096508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:05:52.096518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:05:52.096529 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:05:52.096541 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:05:52.096552 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:05:52.096562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:05:52.096575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:05:52.096585 systemd[1]: Reached target paths.target - Path Units.
May 8 00:05:52.096596 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:05:52.096609 systemd[1]: Reached target swap.target - Swaps.
May 8 00:05:52.096619 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:05:52.096630 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:05:52.096643 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:05:52.096653 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:05:52.096664 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:05:52.096674 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:05:52.096685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:05:52.096696 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:05:52.096706 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:05:52.096717 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:05:52.096730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:05:52.096740 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:05:52.096751 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:05:52.096761 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:05:52.096771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:05:52.096782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:05:52.096825 systemd-journald[183]: Collecting audit messages is disabled.
May 8 00:05:52.096853 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:05:52.096863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:05:52.096875 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:05:52.096888 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:05:52.096900 systemd-journald[183]: Journal started
May 8 00:05:52.096923 systemd-journald[183]: Runtime Journal (/run/log/journal/9879a50e405547118f9b76e0d1ebf2d1) is 4.9M, max 39.3M, 34.4M free.
May 8 00:05:52.073334 systemd-modules-load[184]: Inserted module 'overlay'
May 8 00:05:52.144873 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:05:52.144933 kernel: Bridge firewalling registered
May 8 00:05:52.144958 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:05:52.115663 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 8 00:05:52.145938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:05:52.152955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:05:52.164600 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:05:52.167671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:05:52.172565 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:05:52.179400 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:05:52.188864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:05:52.193881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:05:52.205780 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:05:52.217623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:05:52.220310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:05:52.222271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:05:52.224682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:05:52.252735 dracut-cmdline[219]: dracut-dracut-053
May 8 00:05:52.259568 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:05:52.260956 systemd-resolved[215]: Positive Trust Anchors:
May 8 00:05:52.260965 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:05:52.261004 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:05:52.265380 systemd-resolved[215]: Defaulting to hostname 'linux'.
May 8 00:05:52.267826 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:05:52.268691 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:05:52.376362 kernel: SCSI subsystem initialized
May 8 00:05:52.388362 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:05:52.402331 kernel: iscsi: registered transport (tcp)
May 8 00:05:52.433694 kernel: iscsi: registered transport (qla4xxx)
May 8 00:05:52.433798 kernel: QLogic iSCSI HBA Driver
May 8 00:05:52.497647 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:05:52.504589 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:05:52.555848 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:05:52.555941 kernel: device-mapper: uevent: version 1.0.3
May 8 00:05:52.555988 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:05:52.612357 kernel: raid6: avx2x4 gen() 16113 MB/s
May 8 00:05:52.628349 kernel: raid6: avx2x2 gen() 16311 MB/s
May 8 00:05:52.646663 kernel: raid6: avx2x1 gen() 12238 MB/s
May 8 00:05:52.646759 kernel: raid6: using algorithm avx2x2 gen() 16311 MB/s
May 8 00:05:52.666054 kernel: raid6: .... xor() 18422 MB/s, rmw enabled
May 8 00:05:52.666159 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:05:52.692351 kernel: xor: automatically using best checksumming function avx
May 8 00:05:52.886393 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:05:52.902690 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:05:52.909619 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:05:52.948507 systemd-udevd[402]: Using default interface naming scheme 'v255'.
May 8 00:05:52.957989 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:05:52.966495 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:05:52.986771 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
May 8 00:05:53.033557 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:05:53.040566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:05:53.133515 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:05:53.142999 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:05:53.184823 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:05:53.187519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:05:53.188597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:05:53.191034 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:05:53.199922 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:05:53.231303 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:05:53.272356 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 8 00:05:53.385463 kernel: scsi host0: Virtio SCSI HBA
May 8 00:05:53.385675 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 8 00:05:53.385837 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:05:53.385864 kernel: ACPI: bus type USB registered
May 8 00:05:53.385890 kernel: usbcore: registered new interface driver usbfs
May 8 00:05:53.385915 kernel: usbcore: registered new interface driver hub
May 8 00:05:53.385952 kernel: usbcore: registered new device driver usb
May 8 00:05:53.385973 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 8 00:05:53.386181 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 8 00:05:53.386399 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 8 00:05:53.386575 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 8 00:05:53.386747 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:05:53.386773 kernel: GPT:9289727 != 125829119
May 8 00:05:53.386798 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:05:53.386829 kernel: GPT:9289727 != 125829119
May 8 00:05:53.386854 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:05:53.386879 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:05:53.386905 kernel: hub 1-0:1.0: USB hub found
May 8 00:05:53.387099 kernel: hub 1-0:1.0: 2 ports detected
May 8 00:05:53.387268 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 8 00:05:53.408603 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:05:53.408637 kernel: AES CTR mode by8 optimization enabled
May 8 00:05:53.408673 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 8 00:05:53.327324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:05:53.524234 kernel: libata version 3.00 loaded.
May 8 00:05:53.524277 kernel: ata_piix 0000:00:01.1: version 2.13
May 8 00:05:53.524599 kernel: scsi host1: ata_piix
May 8 00:05:53.524811 kernel: scsi host2: ata_piix
May 8 00:05:53.525036 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 8 00:05:53.525065 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 8 00:05:53.525091 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (456)
May 8 00:05:53.525211 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (462)
May 8 00:05:53.327466 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:05:53.328283 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:05:53.329334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:05:53.329584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:05:53.330401 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:05:53.341888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:05:53.342817 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:05:53.494738 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:05:53.523525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:05:53.563058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:05:53.576655 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:05:53.577608 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:05:53.597161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:05:53.603549 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:05:53.615620 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:05:53.626885 disk-uuid[540]: Primary Header is updated.
May 8 00:05:53.626885 disk-uuid[540]: Secondary Entries is updated.
May 8 00:05:53.626885 disk-uuid[540]: Secondary Header is updated.
May 8 00:05:53.635503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:05:53.646923 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:05:54.654272 disk-uuid[543]: The operation has completed successfully.
May 8 00:05:54.656268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:05:54.716946 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:05:54.718224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:05:54.784565 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:05:54.791303 sh[560]: Success
May 8 00:05:54.808407 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 8 00:05:54.881584 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:05:54.902406 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:05:54.903973 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:05:54.940588 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a
May 8 00:05:54.940673 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:05:54.942758 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:05:54.944684 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:05:54.946398 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:05:54.958229 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:05:54.959992 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:05:54.966834 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:05:54.974007 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:05:55.006160 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:05:55.006272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:05:55.006326 kernel: BTRFS info (device vda6): using free space tree
May 8 00:05:55.012347 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:05:55.019360 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:05:55.023668 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:05:55.032752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:05:55.191053 ignition[657]: Ignition 2.20.0
May 8 00:05:55.192216 ignition[657]: Stage: fetch-offline
May 8 00:05:55.192359 ignition[657]: no configs at "/usr/lib/ignition/base.d"
May 8 00:05:55.192377 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 8 00:05:55.193307 ignition[657]: parsed url from cmdline: ""
May 8 00:05:55.193323 ignition[657]: no config URL provided
May 8 00:05:55.193336 ignition[657]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:05:55.198271 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:05:55.193352 ignition[657]: no config at "/usr/lib/ignition/user.ign"
May 8 00:05:55.193362 ignition[657]: failed to fetch config: resource requires networking
May 8 00:05:55.193959 ignition[657]: Ignition finished successfully
May 8 00:05:55.221490 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:05:55.231834 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:05:55.287106 systemd-networkd[748]: lo: Link UP
May 8 00:05:55.287125 systemd-networkd[748]: lo: Gained carrier
May 8 00:05:55.291865 systemd-networkd[748]: Enumeration completed
May 8 00:05:55.292538 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 8 00:05:55.292546 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 8 00:05:55.292854 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:05:55.294670 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:05:55.294684 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:05:55.295933 systemd-networkd[748]: eth0: Link UP
May 8 00:05:55.295941 systemd-networkd[748]: eth0: Gained carrier
May 8 00:05:55.295961 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 8 00:05:55.296027 systemd[1]: Reached target network.target - Network.
May 8 00:05:55.302150 systemd-networkd[748]: eth1: Link UP
May 8 00:05:55.302159 systemd-networkd[748]: eth1: Gained carrier
May 8 00:05:55.302186 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:05:55.307684 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 8 00:05:55.318540 systemd-networkd[748]: eth0: DHCPv4 address 146.190.122.31/20, gateway 146.190.112.1 acquired from 169.254.169.253
May 8 00:05:55.322483 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253
May 8 00:05:55.336668 ignition[751]: Ignition 2.20.0
May 8 00:05:55.336684 ignition[751]: Stage: fetch
May 8 00:05:55.336940 ignition[751]: no configs at "/usr/lib/ignition/base.d"
May 8 00:05:55.336952 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 8 00:05:55.337156 ignition[751]: parsed url from cmdline: ""
May 8 00:05:55.337160 ignition[751]: no config URL provided
May 8 00:05:55.337167 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:05:55.337178 ignition[751]: no config at "/usr/lib/ignition/user.ign"
May 8 00:05:55.337209 ignition[751]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 8 00:05:55.372753 ignition[751]: GET result: OK
May 8 00:05:55.373053 ignition[751]: parsing config with SHA512: d564a9adee761325af472f6e4a6390fe2a38dd8283e4f0b9e4bea5fbd90e61f92ced7e43b274cc779cb942e00b61a300b59a28d617cdf4567970b9a0a074a647
May 8 00:05:55.381606 unknown[751]: fetched base config from "system"
May 8 00:05:55.381628 unknown[751]: fetched base config from "system"
May 8 00:05:55.382671 ignition[751]: fetch: fetch complete
May 8 00:05:55.381641 unknown[751]: fetched user config from "digitalocean"
May 8 00:05:55.382685 ignition[751]: fetch: fetch passed
May 8 00:05:55.385815 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 8 00:05:55.382799 ignition[751]: Ignition finished successfully
May 8 00:05:55.392694 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:05:55.432331 ignition[759]: Ignition 2.20.0
May 8 00:05:55.432350 ignition[759]: Stage: kargs
May 8 00:05:55.432690 ignition[759]: no configs at "/usr/lib/ignition/base.d"
May 8 00:05:55.432722 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 8 00:05:55.434576 ignition[759]: kargs: kargs passed
May 8 00:05:55.436218 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:05:55.434688 ignition[759]: Ignition finished successfully
May 8 00:05:55.450567 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:05:55.473398 ignition[765]: Ignition 2.20.0
May 8 00:05:55.473415 ignition[765]: Stage: disks
May 8 00:05:55.473685 ignition[765]: no configs at "/usr/lib/ignition/base.d"
May 8 00:05:55.473698 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 8 00:05:55.475060 ignition[765]: disks: disks passed
May 8 00:05:55.479456 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:05:55.475130 ignition[765]: Ignition finished successfully
May 8 00:05:55.485524 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:05:55.486619 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:05:55.487763 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:05:55.489075 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:05:55.490514 systemd[1]: Reached target basic.target - Basic System.
May 8 00:05:55.505865 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:05:55.530712 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:05:55.536610 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:05:55.548376 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:05:55.678339 kernel: EXT4-fs (vda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none.
May 8 00:05:55.680790 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:05:55.682918 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:05:55.691498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:05:55.694457 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:05:55.700576 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 8 00:05:55.706794 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 8 00:05:55.707636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:05:55.707683 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:05:55.728259 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (782)
May 8 00:05:55.728341 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:05:55.728369 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:05:55.728394 kernel: BTRFS info (device vda6): using free space tree
May 8 00:05:55.718273 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:05:55.735086 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:05:55.757097 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:05:55.775765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:05:55.842411 coreos-metadata[784]: May 08 00:05:55.842 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 8 00:05:55.848388 coreos-metadata[785]: May 08 00:05:55.848 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 8 00:05:55.851917 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:05:55.857860 coreos-metadata[784]: May 08 00:05:55.856 INFO Fetch successful
May 8 00:05:55.865325 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
May 8 00:05:55.866986 coreos-metadata[785]: May 08 00:05:55.866 INFO Fetch successful
May 8 00:05:55.870089 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 8 00:05:55.870267 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 8 00:05:55.879635 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:05:55.882027 coreos-metadata[785]: May 08 00:05:55.881 INFO wrote hostname ci-4230.1.1-n-e3439e552d to /sysroot/etc/hostname
May 8 00:05:55.884175 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 00:05:55.890816 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:05:56.040906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:05:56.047447 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:05:56.051553 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:05:56.061267 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:05:56.066320 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:05:56.098057 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:05:56.102349 ignition[902]: INFO : Ignition 2.20.0
May 8 00:05:56.102349 ignition[902]: INFO : Stage: mount
May 8 00:05:56.103756 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:05:56.103756 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 8 00:05:56.103756 ignition[902]: INFO : mount: mount passed
May 8 00:05:56.103756 ignition[902]: INFO : Ignition finished successfully
May 8 00:05:56.105225 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:05:56.109501 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:05:56.132842 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:05:56.145329 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (914)
May 8 00:05:56.148535 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:05:56.148608 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:05:56.150578 kernel: BTRFS info (device vda6): using free space tree
May 8 00:05:56.155332 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:05:56.158714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:05:56.206667 ignition[931]: INFO : Ignition 2.20.0
May 8 00:05:56.206667 ignition[931]: INFO : Stage: files
May 8 00:05:56.208165 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:05:56.208165 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 8 00:05:56.209948 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:05:56.209948 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:05:56.209948 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:05:56.213502 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:05:56.214520 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:05:56.214520 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:05:56.214112 unknown[931]: wrote ssh authorized keys file for user: core
May 8 00:05:56.217705 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:05:56.217705 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:05:56.492000 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:05:56.722446 systemd-networkd[748]: eth1: Gained IPv6LL
May 8 00:05:56.978656 systemd-networkd[748]: eth0: Gained IPv6LL
May 8 00:05:57.080952 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:05:57.080952 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:05:57.083625 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 8 00:05:57.560865 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:05:57.630328 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:05:57.632120 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:05:57.632120 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:05:57.632120 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:05:57.636034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:05:58.047330 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:05:58.465748 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:05:58.465748 ignition[931]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 00:05:58.468386 ignition[931]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:05:58.468386 ignition[931]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:05:58.468386 ignition[931]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 00:05:58.468386 ignition[931]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:05:58.468386 ignition[931]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:05:58.468386 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:05:58.468386 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:05:58.468386 ignition[931]: INFO : files: files passed
May 8 00:05:58.468386 ignition[931]: INFO : Ignition finished successfully
May 8 00:05:58.469856 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:05:58.480561 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:05:58.485536 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:05:58.489641 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:05:58.489785 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:05:58.516742 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:05:58.516742 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:05:58.520421 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:05:58.521804 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:05:58.523146 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:05:58.528492 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:05:58.574492 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:05:58.574650 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:05:58.576722 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:05:58.577587 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:05:58.578975 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:05:58.584573 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:05:58.612535 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:05:58.624637 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:05:58.642946 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:05:58.643884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:05:58.645306 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:05:58.646568 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:05:58.646776 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:05:58.648268 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:05:58.649837 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:05:58.650885 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:05:58.651981 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:05:58.653210 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:05:58.654432 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:05:58.655638 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:05:58.657091 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:05:58.658397 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:05:58.659711 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:05:58.660792 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:05:58.661004 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:05:58.662412 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:05:58.663340 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:05:58.664598 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:05:58.664747 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:05:58.666002 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:05:58.666242 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:05:58.667595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:05:58.667842 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:05:58.669407 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:05:58.669586 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:05:58.670568 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 8 00:05:58.670792 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 00:05:58.680681 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:05:58.688654 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:05:58.689968 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:05:58.690198 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:05:58.692648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:05:58.692892 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:05:58.707688 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:05:58.707850 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:05:58.712545 ignition[983]: INFO : Ignition 2.20.0
May 8 00:05:58.712545 ignition[983]: INFO : Stage: umount
May 8 00:05:58.712545 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:05:58.712545 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 8 00:05:58.723389 ignition[983]: INFO : umount: umount passed
May 8 00:05:58.723389 ignition[983]: INFO : Ignition finished successfully
May 8 00:05:58.717024 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:05:58.717197 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:05:58.726179 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:05:58.726364 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:05:58.727085 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:05:58.727148 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:05:58.727791 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 8 00:05:58.727846 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 8 00:05:58.731616 systemd[1]: Stopped target network.target - Network.
May 8 00:05:58.732530 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:05:58.732629 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:05:58.733419 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:05:58.733966 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:05:58.745126 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:05:58.746261 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:05:58.749444 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:05:58.750322 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:05:58.750394 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:05:58.751058 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:05:58.751116 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:05:58.753963 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:05:58.754058 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:05:58.761907 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:05:58.761997 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:05:58.764848 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:05:58.765636 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:05:58.770201 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:05:58.777413 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:05:58.777770 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:05:58.785262 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 00:05:58.785773 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:05:58.787768 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:05:58.793705 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 00:05:58.797458 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:05:58.797590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:05:58.805678 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:05:58.813511 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:05:58.813628 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:05:58.816815 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:05:58.816908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:05:58.818374 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:05:58.818456 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:05:58.819849 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:05:58.819925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:05:58.822692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:05:58.828721 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:05:58.828838 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:05:58.829706 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:05:58.829844 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:05:58.843503 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:05:58.843736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:05:58.847465 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:05:58.847650 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:05:58.849956 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:05:58.850057 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:05:58.851627 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:05:58.851687 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:05:58.853076 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:05:58.853160 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:05:58.855158 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:05:58.855236 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:05:58.856490 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:05:58.856569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:05:58.858075 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:05:58.858146 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:05:58.869590 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:05:58.872699 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:05:58.872814 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:05:58.874442 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 00:05:58.874524 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:05:58.876821 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:05:58.876909 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:05:58.877676 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:05:58.877740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:05:58.881885 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 8 00:05:58.881991 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:05:58.882631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:05:58.882768 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:05:58.884688 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:05:58.891621 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:05:58.911474 systemd[1]: Switching root.
May 8 00:05:58.958821 systemd-journald[183]: Journal stopped
May 8 00:06:00.628313 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
May 8 00:06:00.628458 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:06:00.628492 kernel: SELinux: policy capability open_perms=1
May 8 00:06:00.628513 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:06:00.628534 kernel: SELinux: policy capability always_check_network=0
May 8 00:06:00.628552 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:06:00.628571 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:06:00.628591 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:06:00.628622 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:06:00.628644 kernel: audit: type=1403 audit(1746662759.104:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:06:00.628669 systemd[1]: Successfully loaded SELinux policy in 46.801ms.
May 8 00:06:00.628703 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.988ms.
May 8 00:06:00.628727 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:06:00.628748 systemd[1]: Detected virtualization kvm.
May 8 00:06:00.628769 systemd[1]: Detected architecture x86-64.
May 8 00:06:00.628790 systemd[1]: Detected first boot.
May 8 00:06:00.628811 systemd[1]: Hostname set to .
May 8 00:06:00.628830 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:06:00.628849 zram_generator::config[1028]: No configuration found.
May 8 00:06:00.628876 kernel: Guest personality initialized and is inactive
May 8 00:06:00.628907 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 8 00:06:00.628928 kernel: Initialized host personality
May 8 00:06:00.628949 kernel: NET: Registered PF_VSOCK protocol family
May 8 00:06:00.628972 systemd[1]: Populated /etc with preset unit settings.
May 8 00:06:00.628996 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 00:06:00.629037 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:06:00.629061 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:06:00.629091 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:06:00.629116 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:06:00.629139 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:06:00.629160 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:06:00.629182 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:06:00.629202 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:06:00.629231 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:06:00.629252 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:06:00.629276 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:06:00.629312 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:06:00.629332 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:06:00.629352 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:06:00.629371 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:06:00.629397 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:06:00.629424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:06:00.629446 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:06:00.629465 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:06:00.629485 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:06:00.629507 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:06:00.629529 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:06:00.629550 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:06:00.629574 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:06:00.629595 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:06:00.629618 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:06:00.629645 systemd[1]: Reached target swap.target - Swaps.
May 8 00:06:00.629667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:06:00.629691 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:06:00.629717 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 00:06:00.629740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:06:00.629763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:06:00.629785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:06:00.629808 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:06:00.629837 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:06:00.629867 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:06:00.629890 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:06:00.629913 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:00.629935 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:06:00.629958 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:06:00.629981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:06:00.630005 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:06:00.630027 systemd[1]: Reached target machines.target - Containers.
May 8 00:06:00.630053 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:06:00.630076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:06:00.630097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:06:00.630116 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:06:00.630135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:06:00.630157 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:06:00.630177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:06:00.630197 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:06:00.630221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:06:00.630248 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:06:00.630270 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:06:00.632365 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:06:00.632421 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:06:00.632445 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:06:00.632469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:06:00.632490 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:06:00.632510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:06:00.632544 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:06:00.632562 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:06:00.632584 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 00:06:00.632608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:06:00.632639 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:06:00.632669 systemd[1]: Stopped verity-setup.service.
May 8 00:06:00.632692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:00.632714 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:06:00.632736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:06:00.632760 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:06:00.632792 kernel: loop: module loaded
May 8 00:06:00.632816 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:06:00.632839 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:06:00.632861 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:06:00.632883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:06:00.632906 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:06:00.632929 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:06:00.632951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:06:00.632973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:06:00.632997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:06:00.633041 kernel: fuse: init (API version 7.39)
May 8 00:06:00.633063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:06:00.633085 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:06:00.633107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:06:00.633130 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:06:00.633150 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:06:00.633171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:06:00.633191 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:06:00.633219 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:06:00.633241 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:06:00.633265 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 00:06:00.633375 systemd-journald[1109]: Collecting audit messages is disabled.
May 8 00:06:00.633423 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:06:00.633455 systemd-journald[1109]: Journal started
May 8 00:06:00.633504 systemd-journald[1109]: Runtime Journal (/run/log/journal/9879a50e405547118f9b76e0d1ebf2d1) is 4.9M, max 39.3M, 34.4M free.
May 8 00:06:00.145202 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:06:00.151411 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:06:00.152094 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:06:00.645055 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:06:00.648388 kernel: ACPI: bus type drm_connector registered
May 8 00:06:00.657333 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:06:00.661332 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:06:00.661447 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:06:00.668328 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 00:06:00.678328 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:06:00.694314 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:06:00.698328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:06:00.711406 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:06:00.715387 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:06:00.742782 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:06:00.742910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:06:00.762935 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:06:00.770690 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:06:00.792320 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:06:00.796933 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:06:00.799270 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:06:00.799641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:06:00.800980 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:06:00.801891 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:06:00.802997 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:06:00.804955 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:06:00.847719 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:06:00.861537 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:06:00.864893 kernel: loop0: detected capacity change from 0 to 8
May 8 00:06:00.869603 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 00:06:00.885971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:06:00.907317 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:06:00.923244 systemd-journald[1109]: Time spent on flushing to /var/log/journal/9879a50e405547118f9b76e0d1ebf2d1 is 160.531ms for 1011 entries.
May 8 00:06:00.923244 systemd-journald[1109]: System Journal (/var/log/journal/9879a50e405547118f9b76e0d1ebf2d1) is 8M, max 195.6M, 187.6M free.
May 8 00:06:01.116473 systemd-journald[1109]: Received client request to flush runtime journal.
May 8 00:06:01.116557 kernel: loop1: detected capacity change from 0 to 147912
May 8 00:06:01.116585 kernel: loop2: detected capacity change from 0 to 138176
May 8 00:06:00.970623 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
May 8 00:06:00.970665 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
May 8 00:06:00.990490 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 00:06:01.004451 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:06:01.018547 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:06:01.021061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:06:01.038362 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:06:01.121853 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:06:01.134123 kernel: loop3: detected capacity change from 0 to 210664
May 8 00:06:01.139143 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 00:06:01.155201 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:06:01.191327 kernel: loop4: detected capacity change from 0 to 8
May 8 00:06:01.201377 kernel: loop5: detected capacity change from 0 to 147912
May 8 00:06:01.233179 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:06:01.249067 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:06:01.262450 kernel: loop6: detected capacity change from 0 to 138176
May 8 00:06:01.340884 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
May 8 00:06:01.340911 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
May 8 00:06:01.347020 kernel: loop7: detected capacity change from 0 to 210664
May 8 00:06:01.363507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:06:01.394023 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 8 00:06:01.394923 (sd-merge)[1179]: Merged extensions into '/usr'.
May 8 00:06:01.402807 systemd[1]: Reload requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:06:01.402833 systemd[1]: Reloading...
May 8 00:06:01.644355 zram_generator::config[1211]: No configuration found.
May 8 00:06:01.734373 ldconfig[1131]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:06:01.950815 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:06:02.107691 systemd[1]: Reloading finished in 704 ms.
May 8 00:06:02.129037 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:06:02.130692 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:06:02.154606 systemd[1]: Starting ensure-sysext.service...
May 8 00:06:02.159249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:06:02.185549 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
May 8 00:06:02.185572 systemd[1]: Reloading...
May 8 00:06:02.233430 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:06:02.234002 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:06:02.236396 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:06:02.237334 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
May 8 00:06:02.237631 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
May 8 00:06:02.246125 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:06:02.247904 systemd-tmpfiles[1255]: Skipping /boot
May 8 00:06:02.307281 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:06:02.309570 systemd-tmpfiles[1255]: Skipping /boot
May 8 00:06:02.370351 zram_generator::config[1280]: No configuration found.
May 8 00:06:02.669453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:06:02.853758 systemd[1]: Reloading finished in 666 ms.
May 8 00:06:02.871321 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:06:02.886252 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:06:02.904849 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:06:02.920178 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:06:02.925864 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:06:02.939795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:06:02.947832 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:06:02.964835 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:06:02.973601 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:02.974709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:06:02.984334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:06:02.995741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:06:03.006339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:06:03.007588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:06:03.008170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:06:03.021858 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:06:03.022619 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:03.037967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:03.039362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:06:03.039657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:06:03.039803 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:06:03.039951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:03.045524 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:06:03.060019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:03.060791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:06:03.073073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:06:03.074079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:06:03.075061 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:06:03.075331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:03.076947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:06:03.078446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:06:03.088668 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:06:03.090260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:06:03.091613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:06:03.095480 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:06:03.095734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:06:03.112442 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:06:03.113959 systemd-udevd[1337]: Using default interface naming scheme 'v255'.
May 8 00:06:03.120061 systemd[1]: Finished ensure-sysext.service.
May 8 00:06:03.122862 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:06:03.123415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:06:03.126721 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:06:03.126871 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:06:03.138952 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:06:03.148566 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:06:03.149692 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:06:03.188390 augenrules[1369]: No rules
May 8 00:06:03.189281 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:06:03.190580 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:06:03.191129 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:06:03.202674 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:06:03.205036 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:06:03.210364 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:06:03.439646 systemd-resolved[1333]: Positive Trust Anchors:
May 8 00:06:03.439671 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:06:03.439732 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:06:03.457845 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:06:03.458214 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:06:03.458952 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:06:03.462271 systemd-networkd[1379]: lo: Link UP
May 8 00:06:03.464142 systemd-resolved[1333]: Using system hostname 'ci-4230.1.1-n-e3439e552d'.
May 8 00:06:03.467389 systemd-networkd[1379]: lo: Gained carrier
May 8 00:06:03.468781 systemd-networkd[1379]: Enumeration completed
May 8 00:06:03.468925 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:06:03.478708 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 00:06:03.486651 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:06:03.488523 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:06:03.489319 systemd[1]: Reached target network.target - Network.
May 8 00:06:03.489839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:06:03.527932 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 00:06:03.547619 systemd-networkd[1379]: eth0: Configuring with /run/systemd/network/10-26:8a:8d:95:ec:ed.network.
May 8 00:06:03.549163 systemd-networkd[1379]: eth0: Link UP
May 8 00:06:03.550843 systemd-networkd[1379]: eth0: Gained carrier
May 8 00:06:03.560469 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:03.564500 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
May 8 00:06:03.570884 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 8 00:06:03.571889 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:03.574473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:06:03.576171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:06:03.585504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:06:03.589574 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:06:03.592184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:06:03.592246 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:06:03.592381 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:06:03.592410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:06:03.618320 kernel: ISO 9660 Extensions: RRIP_1991A
May 8 00:06:03.621237 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 8 00:06:03.629355 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:06:03.629734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:06:03.630437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:06:03.631877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:06:03.632255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:06:03.640174 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:06:03.643352 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 8 00:06:03.646757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:06:03.651563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:06:03.651641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:06:03.664311 kernel: ACPI: button: Power Button [PWRF]
May 8 00:06:03.664425 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1393)
May 8 00:06:03.707111 systemd-networkd[1379]: eth1: Configuring with /run/systemd/network/10-22:d5:53:db:76:04.network.
May 8 00:06:03.710433 systemd-networkd[1379]: eth1: Link UP
May 8 00:06:03.710446 systemd-networkd[1379]: eth1: Gained carrier
May 8 00:06:03.712902 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:03.713770 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:03.715842 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:03.722336 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:06:03.774617 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 8 00:06:03.774692 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 8 00:06:03.792048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:06:03.797330 kernel: Console: switching to colour dummy device 80x25
May 8 00:06:03.804541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:06:03.808133 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 8 00:06:03.808226 kernel: [drm] features: -context_init
May 8 00:06:03.831020 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:06:03.846236 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:06:03.866592 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:06:03.871740 kernel: [drm] number of scanouts: 1
May 8 00:06:03.872059 kernel: [drm] number of cap sets: 0
May 8 00:06:03.873831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:06:03.874451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:06:03.884330 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 8 00:06:03.891727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:06:03.914163 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 8 00:06:03.914273 kernel: Console: switching to colour frame buffer device 128x48
May 8 00:06:03.926648 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 8 00:06:03.953795 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:06:03.954157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:06:03.991707 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:06:04.005782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:06:04.068143 kernel: EDAC MC: Ver: 3.0.0
May 8 00:06:04.096027 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:06:04.103634 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:06:04.112621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:06:04.129325 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:06:04.164723 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:06:04.165418 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:06:04.166197 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:06:04.166514 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:06:04.166686 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:06:04.167044 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:06:04.167259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:06:04.168205 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:06:04.168797 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:06:04.168872 systemd[1]: Reached target paths.target - Path Units.
May 8 00:06:04.169030 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:06:04.171700 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:06:04.174368 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:06:04.181777 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 00:06:04.183405 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 8 00:06:04.183557 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 8 00:06:04.194835 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:06:04.197310 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 00:06:04.204614 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:06:04.207514 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:06:04.210181 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:06:04.211046 systemd[1]: Reached target basic.target - Basic System.
May 8 00:06:04.213690 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:06:04.215451 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:06:04.213750 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:06:04.222476 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:06:04.229469 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 00:06:04.239565 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:06:04.255468 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:06:04.260488 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:06:04.262130 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:06:04.270591 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:06:04.282427 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:06:04.287037 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:06:04.292809 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:06:04.300559 dbus-daemon[1452]: [system] SELinux support is enabled
May 8 00:06:04.308701 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:06:04.314437 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:06:04.315374 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:06:04.321588 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:06:04.326257 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:06:04.329657 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:06:04.337416 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:06:04.356724 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:06:04.356773 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:06:04.359876 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:06:04.371398 jq[1453]: false
May 8 00:06:04.387607 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:06:04.389344 update_engine[1464]: I20250508 00:06:04.388687 1464 main.cc:92] Flatcar Update Engine starting
May 8 00:06:04.389364 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:06:04.389896 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 8 00:06:04.389950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:06:04.392005 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:06:04.394446 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:06:04.407319 extend-filesystems[1454]: Found loop4
May 8 00:06:04.407319 extend-filesystems[1454]: Found loop5
May 8 00:06:04.407319 extend-filesystems[1454]: Found loop6
May 8 00:06:04.407319 extend-filesystems[1454]: Found loop7
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda1
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda2
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda3
May 8 00:06:04.407319 extend-filesystems[1454]: Found usr
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda4
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda6
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda7
May 8 00:06:04.407319 extend-filesystems[1454]: Found vda9
May 8 00:06:04.407319 extend-filesystems[1454]: Checking size of /dev/vda9
May 8 00:06:04.486093 extend-filesystems[1454]: Resized partition /dev/vda9
May 8 00:06:04.486812 coreos-metadata[1451]: May 08 00:06:04.468 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 8 00:06:04.487199 update_engine[1464]: I20250508 00:06:04.413254 1464 update_check_scheduler.cc:74] Next update check in 2m46s
May 8 00:06:04.410636 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:06:04.410967 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:06:04.415425 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:06:04.497527 jq[1465]: true
May 8 00:06:04.425242 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:06:04.497859 tar[1471]: linux-amd64/helm
May 8 00:06:04.493003 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:06:04.510957 jq[1488]: true
May 8 00:06:04.511363 coreos-metadata[1451]: May 08 00:06:04.505 INFO Fetch successful
May 8 00:06:04.511430 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024)
May 8 00:06:04.534353 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
May 8 00:06:04.573319 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1390)
May 8 00:06:04.614188 systemd-logind[1463]: New seat seat0.
May 8 00:06:04.631503 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:06:04.631539 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:06:04.631833 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:06:04.690798 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 8 00:06:04.698616 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:06:04.708312 kernel: EXT4-fs (vda9): resized filesystem to 15121403
May 8 00:06:04.728663 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:06:04.728663 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 8
May 8 00:06:04.728663 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
May 8 00:06:04.745720 extend-filesystems[1454]: Resized filesystem in /dev/vda9
May 8 00:06:04.745720 extend-filesystems[1454]: Found vdb
May 8 00:06:04.758456 bash[1514]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:06:04.760403 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:06:04.760671 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:06:04.764523 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:06:04.794485 systemd-networkd[1379]: eth1: Gained IPv6LL
May 8 00:06:04.810455 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:04.841789 systemd[1]: Starting sshkeys.service...
May 8 00:06:04.853396 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:06:04.857087 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:06:04.871734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:06:04.885866 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:06:04.981413 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 8 00:06:04.996988 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 8 00:06:05.107437 systemd-networkd[1379]: eth0: Gained IPv6LL
May 8 00:06:05.108059 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:05.123161 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:06:05.153880 coreos-metadata[1525]: May 08 00:06:05.153 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 8 00:06:05.154586 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:06:05.171348 coreos-metadata[1525]: May 08 00:06:05.171 INFO Fetch successful
May 8 00:06:05.196412 unknown[1525]: wrote ssh authorized keys file for user: core
May 8 00:06:05.278737 update-ssh-keys[1543]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:06:05.281531 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
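The resize2fs messages above show extend-filesystems growing /dev/vda9 online from 553472 to 15121403 blocks of 4 KiB each. A quick arithmetic sketch, just to make those block counts concrete (the constants are the figures from the log, nothing else):

```python
BLOCK_SIZE = 4096  # ext4 block size implied by the "(4k) blocks" message

# Block counts reported by "EXT4-fs (vda9): resizing filesystem from ... to ..."
old_blocks, new_blocks = 553_472, 15_121_403

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30

print(f"/dev/vda9: {old_gib:.1f} GiB -> {new_gib:.1f} GiB")
```

That works out to roughly 2.1 GiB growing to about 57.7 GiB, i.e. the root filesystem expanding to fill the droplet's disk on first boot.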
May 8 00:06:05.287028 systemd[1]: Finished sshkeys.service.
May 8 00:06:05.362080 containerd[1485]: time="2025-05-08T00:06:05.361882564Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 8 00:06:05.467031 containerd[1485]: time="2025-05-08T00:06:05.466922563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:06:05.472967 containerd[1485]: time="2025-05-08T00:06:05.472880457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:06:05.473557 containerd[1485]: time="2025-05-08T00:06:05.473527991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:06:05.473665 containerd[1485]: time="2025-05-08T00:06:05.473647709Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:06:05.474110 containerd[1485]: time="2025-05-08T00:06:05.474079212Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:06:05.474548 containerd[1485]: time="2025-05-08T00:06:05.474525523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:06:05.474752 containerd[1485]: time="2025-05-08T00:06:05.474725248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:06:05.477148 containerd[1485]: time="2025-05-08T00:06:05.475308696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:06:05.477148 containerd[1485]: time="2025-05-08T00:06:05.475720954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:06:05.477148 containerd[1485]: time="2025-05-08T00:06:05.475748336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:06:05.477148 containerd[1485]: time="2025-05-08T00:06:05.475770917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:06:05.477148 containerd[1485]: time="2025-05-08T00:06:05.475786682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:06:05.477148 containerd[1485]: time="2025-05-08T00:06:05.475946857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:06:05.477148 containerd[1485]: time="2025-05-08T00:06:05.476258843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:06:05.477840 containerd[1485]: time="2025-05-08T00:06:05.477808206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:06:05.478112 containerd[1485]: time="2025-05-08T00:06:05.478088209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:06:05.478470 containerd[1485]: time="2025-05-08T00:06:05.478442429Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:06:05.478999 containerd[1485]: time="2025-05-08T00:06:05.478974574Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.486698377Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.486829518Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.486860818Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.486891936Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.486916882Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.487220267Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.487647248Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.488104694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.488139883Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.488167697Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.488193895Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.488216790Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.488239869Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:06:05.488605 containerd[1485]: time="2025-05-08T00:06:05.488265824Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:06:05.490322 containerd[1485]: time="2025-05-08T00:06:05.490257315Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:06:05.490602 containerd[1485]: time="2025-05-08T00:06:05.490574312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:06:05.490705 containerd[1485]: time="2025-05-08T00:06:05.490688130Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:06:05.490786 containerd[1485]: time="2025-05-08T00:06:05.490771145Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491683749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..."
type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491725212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491748382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491770894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491790498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491811100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491844971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491867565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491889618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491914298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491933915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491952478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491973670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.491997849Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:06:05.492192 containerd[1485]: time="2025-05-08T00:06:05.492036975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492865 containerd[1485]: time="2025-05-08T00:06:05.492084368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:06:05.492865 containerd[1485]: time="2025-05-08T00:06:05.492109739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493010728Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493156745Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493180264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493203829Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493219518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493272867Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493304245Z" level=info msg="NRI interface is disabled by configuration." May 8 00:06:05.495039 containerd[1485]: time="2025-05-08T00:06:05.493322201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:06:05.495481 containerd[1485]: time="2025-05-08T00:06:05.493810352Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:06:05.495481 containerd[1485]: time="2025-05-08T00:06:05.493908530Z" level=info msg="Connect containerd service" May 8 00:06:05.495481 containerd[1485]: time="2025-05-08T00:06:05.493981270Z" level=info msg="using legacy CRI server" May 8 00:06:05.495481 containerd[1485]: time="2025-05-08T00:06:05.493995455Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:06:05.495481 containerd[1485]: time="2025-05-08T00:06:05.494212518Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:06:05.500178 containerd[1485]: time="2025-05-08T00:06:05.498849463Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" May 8 00:06:05.500178 containerd[1485]: time="2025-05-08T00:06:05.499748266Z" level=info msg="Start subscribing containerd event" May 8 00:06:05.500178 containerd[1485]: time="2025-05-08T00:06:05.499836267Z" level=info msg="Start recovering state" May 8 00:06:05.500178 containerd[1485]: time="2025-05-08T00:06:05.499959224Z" level=info msg="Start event monitor" May 8 00:06:05.500178 containerd[1485]: time="2025-05-08T00:06:05.499980893Z" level=info msg="Start snapshots syncer" May 8 00:06:05.500178 containerd[1485]: time="2025-05-08T00:06:05.499997255Z" level=info msg="Start cni network conf syncer for default" May 8 00:06:05.500178 containerd[1485]: time="2025-05-08T00:06:05.500009959Z" level=info msg="Start streaming server" May 8 00:06:05.503759 containerd[1485]: time="2025-05-08T00:06:05.502408840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:06:05.503759 containerd[1485]: time="2025-05-08T00:06:05.502578005Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:06:05.510222 containerd[1485]: time="2025-05-08T00:06:05.508505587Z" level=info msg="containerd successfully booted in 0.148049s" May 8 00:06:05.510514 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:06:05.612700 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:06:05.688938 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:06:05.760931 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:06:05.774762 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:06:05.789443 systemd[1]: Started sshd@0-146.190.122.31:22-139.178.68.195:53542.service - OpenSSH per-connection server daemon (139.178.68.195:53542). May 8 00:06:05.821480 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:06:05.822429 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 8 00:06:05.834407 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:06:05.891675 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:06:05.906786 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:06:05.924050 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:06:05.928456 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:06:05.937421 sshd[1560]: Accepted publickey for core from 139.178.68.195 port 53542 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:06:05.940828 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:05.958543 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:06:05.969687 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:06:06.001388 systemd-logind[1463]: New session 1 of user core.
May 8 00:06:06.024253 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:06:06.036852 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:06:06.044806 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:06:06.053541 systemd-logind[1463]: New session c1 of user core.
May 8 00:06:06.101735 tar[1471]: linux-amd64/LICENSE
May 8 00:06:06.101735 tar[1471]: linux-amd64/README.md
May 8 00:06:06.131001 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:06:06.324666 systemd[1572]: Queued start job for default target default.target.
May 8 00:06:06.330333 systemd[1572]: Created slice app.slice - User Application Slice.
May 8 00:06:06.330389 systemd[1572]: Reached target paths.target - Paths.
May 8 00:06:06.330472 systemd[1572]: Reached target timers.target - Timers.
May 8 00:06:06.333510 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:06:06.362232 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:06:06.363566 systemd[1572]: Reached target sockets.target - Sockets.
May 8 00:06:06.363880 systemd[1572]: Reached target basic.target - Basic System.
May 8 00:06:06.363954 systemd[1572]: Reached target default.target - Main User Target.
May 8 00:06:06.364007 systemd[1572]: Startup finished in 296ms.
May 8 00:06:06.364218 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:06:06.377655 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:06:06.464829 systemd[1]: Started sshd@1-146.190.122.31:22-139.178.68.195:53546.service - OpenSSH per-connection server daemon (139.178.68.195:53546).
May 8 00:06:06.541348 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 53546 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:06:06.543513 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:06.562601 systemd-logind[1463]: New session 2 of user core.
May 8 00:06:06.568663 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:06:06.574191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:06:06.579375 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:06:06.583365 systemd[1]: Startup finished in 1.306s (kernel) + 7.368s (initrd) + 7.524s (userspace) = 16.200s.
May 8 00:06:06.600120 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:06:06.651329 sshd[1594]: Connection closed by 139.178.68.195 port 53546
May 8 00:06:06.652624 sshd-session[1586]: pam_unix(sshd:session): session closed for user core
May 8 00:06:06.669117 systemd[1]: sshd@1-146.190.122.31:22-139.178.68.195:53546.service: Deactivated successfully.
May 8 00:06:06.673195 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:06:06.677198 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit.
May 8 00:06:06.685114 systemd[1]: Started sshd@2-146.190.122.31:22-139.178.68.195:53562.service - OpenSSH per-connection server daemon (139.178.68.195:53562).
May 8 00:06:06.690549 systemd-logind[1463]: Removed session 2.
May 8 00:06:06.747724 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 53562 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:06:06.750099 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:06.758668 systemd-logind[1463]: New session 3 of user core.
May 8 00:06:06.764633 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:06:06.830370 sshd[1606]: Connection closed by 139.178.68.195 port 53562
May 8 00:06:06.834262 sshd-session[1603]: pam_unix(sshd:session): session closed for user core
May 8 00:06:06.848151 systemd[1]: sshd@2-146.190.122.31:22-139.178.68.195:53562.service: Deactivated successfully.
May 8 00:06:06.852121 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:06:06.854513 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit.
May 8 00:06:06.864566 systemd[1]: Started sshd@3-146.190.122.31:22-139.178.68.195:53578.service - OpenSSH per-connection server daemon (139.178.68.195:53578).
May 8 00:06:06.870715 systemd-logind[1463]: Removed session 3.
May 8 00:06:06.930694 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 53578 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:06:06.933200 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:06.945523 systemd-logind[1463]: New session 4 of user core.
May 8 00:06:06.954645 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:06:07.026336 sshd[1618]: Connection closed by 139.178.68.195 port 53578
May 8 00:06:07.026776 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
May 8 00:06:07.041603 systemd[1]: sshd@3-146.190.122.31:22-139.178.68.195:53578.service: Deactivated successfully.
May 8 00:06:07.045391 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:06:07.048433 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit.
May 8 00:06:07.058877 systemd[1]: Started sshd@4-146.190.122.31:22-139.178.68.195:53588.service - OpenSSH per-connection server daemon (139.178.68.195:53588).
May 8 00:06:07.065608 systemd-logind[1463]: Removed session 4.
May 8 00:06:07.118588 sshd[1623]: Accepted publickey for core from 139.178.68.195 port 53588 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:06:07.122212 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:07.133877 systemd-logind[1463]: New session 5 of user core.
May 8 00:06:07.141693 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:06:07.231850 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:06:07.232554 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:06:07.252550 sudo[1627]: pam_unix(sudo:session): session closed for user root
May 8 00:06:07.257991 sshd[1626]: Connection closed by 139.178.68.195 port 53588
May 8 00:06:07.259501 sshd-session[1623]: pam_unix(sshd:session): session closed for user core
May 8 00:06:07.279156 systemd[1]: sshd@4-146.190.122.31:22-139.178.68.195:53588.service: Deactivated successfully.
May 8 00:06:07.284627 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:06:07.286937 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit.
May 8 00:06:07.300201 systemd[1]: Started sshd@5-146.190.122.31:22-139.178.68.195:53594.service - OpenSSH per-connection server daemon (139.178.68.195:53594).
May 8 00:06:07.304541 systemd-logind[1463]: Removed session 5.
May 8 00:06:07.378326 sshd[1633]: Accepted publickey for core from 139.178.68.195 port 53594 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:06:07.381410 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:07.396060 systemd-logind[1463]: New session 6 of user core.
May 8 00:06:07.404805 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:06:07.481487 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:06:07.482726 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:06:07.489857 sudo[1639]: pam_unix(sudo:session): session closed for user root
May 8 00:06:07.498677 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 8 00:06:07.499649 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:06:07.529583 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:06:07.569377 kubelet[1592]: E0508 00:06:07.569315 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:06:07.575775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:06:07.576145 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:06:07.577826 systemd[1]: kubelet.service: Consumed 1.345s CPU time, 247.5M memory peak.
May 8 00:06:07.608127 augenrules[1662]: No rules
May 8 00:06:07.609555 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:06:07.609961 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:06:07.611771 sudo[1637]: pam_unix(sudo:session): session closed for user root
May 8 00:06:07.615639 sshd[1636]: Connection closed by 139.178.68.195 port 53594
May 8 00:06:07.617671 sshd-session[1633]: pam_unix(sshd:session): session closed for user core
May 8 00:06:07.629511 systemd[1]: sshd@5-146.190.122.31:22-139.178.68.195:53594.service: Deactivated successfully.
May 8 00:06:07.633605 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:06:07.637619 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit.
May 8 00:06:07.649878 systemd[1]: Started sshd@6-146.190.122.31:22-139.178.68.195:53608.service - OpenSSH per-connection server daemon (139.178.68.195:53608).
May 8 00:06:07.651519 systemd-logind[1463]: Removed session 6.
May 8 00:06:07.715932 sshd[1670]: Accepted publickey for core from 139.178.68.195 port 53608 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:06:07.719448 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:07.729596 systemd-logind[1463]: New session 7 of user core.
May 8 00:06:07.740816 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:06:07.811663 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:06:07.813067 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:06:08.432874 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:06:08.446124 (dockerd)[1692]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:06:09.040344 dockerd[1692]: time="2025-05-08T00:06:09.040243711Z" level=info msg="Starting up"
May 8 00:06:09.293539 dockerd[1692]: time="2025-05-08T00:06:09.293315517Z" level=info msg="Loading containers: start."
May 8 00:06:09.550415 kernel: Initializing XFRM netlink socket
May 8 00:06:09.589603 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:09.592822 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:09.615236 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:09.697903 systemd-networkd[1379]: docker0: Link UP
May 8 00:06:09.698914 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
May 8 00:06:09.743330 dockerd[1692]: time="2025-05-08T00:06:09.743239391Z" level=info msg="Loading containers: done."
May 8 00:06:09.768504 dockerd[1692]: time="2025-05-08T00:06:09.767793919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:06:09.768504 dockerd[1692]: time="2025-05-08T00:06:09.767986915Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 8 00:06:09.768504 dockerd[1692]: time="2025-05-08T00:06:09.768191201Z" level=info msg="Daemon has completed initialization"
May 8 00:06:09.827824 dockerd[1692]: time="2025-05-08T00:06:09.826904656Z" level=info msg="API listen on /run/docker.sock"
May 8 00:06:09.827209 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:06:11.095345 containerd[1485]: time="2025-05-08T00:06:11.094830186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 8 00:06:11.857045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695350390.mount: Deactivated successfully.
May 8 00:06:13.649345 containerd[1485]: time="2025-05-08T00:06:13.647675976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:13.650546 containerd[1485]: time="2025-05-08T00:06:13.650503630Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 8 00:06:13.650923 containerd[1485]: time="2025-05-08T00:06:13.650895592Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:13.655692 containerd[1485]: time="2025-05-08T00:06:13.655632085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:13.660949 containerd[1485]: time="2025-05-08T00:06:13.660782267Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.565894955s"
May 8 00:06:13.661172 containerd[1485]: time="2025-05-08T00:06:13.661150063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 8 00:06:13.703052 containerd[1485]: time="2025-05-08T00:06:13.703005718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 8 00:06:15.488993 containerd[1485]: time="2025-05-08T00:06:15.488850017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:15.491221 containerd[1485]: time="2025-05-08T00:06:15.491143009Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 8 00:06:15.493570 containerd[1485]: time="2025-05-08T00:06:15.492012670Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:15.497693 containerd[1485]: time="2025-05-08T00:06:15.497638234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:15.500111 containerd[1485]: time="2025-05-08T00:06:15.500060231Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.797003768s"
May 8 00:06:15.500320 containerd[1485]: time="2025-05-08T00:06:15.500280781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 8 00:06:15.543145 containerd[1485]: time="2025-05-08T00:06:15.543080352Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 8 00:06:16.808640 containerd[1485]: time="2025-05-08T00:06:16.808575088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:16.811159 containerd[1485]: time="2025-05-08T00:06:16.811080976Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 8 00:06:16.812732 containerd[1485]: time="2025-05-08T00:06:16.812691445Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:16.817867 containerd[1485]: time="2025-05-08T00:06:16.817778909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:16.821347 containerd[1485]: time="2025-05-08T00:06:16.820277849Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.276767199s"
May 8 00:06:16.821347 containerd[1485]: time="2025-05-08T00:06:16.820344560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 8 00:06:16.855677 containerd[1485]: time="2025-05-08T00:06:16.855601347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 8 00:06:17.652144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:06:17.661001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:06:17.885674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:06:17.894662 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:06:18.019904 kubelet[1982]: E0508 00:06:18.019612 1982 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:06:18.025635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:06:18.025852 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:06:18.026732 systemd[1]: kubelet.service: Consumed 262ms CPU time, 98M memory peak. May 8 00:06:18.117948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808226523.mount: Deactivated successfully. May 8 00:06:18.792505 containerd[1485]: time="2025-05-08T00:06:18.792433024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:18.794705 containerd[1485]: time="2025-05-08T00:06:18.794636610Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 8 00:06:18.796324 containerd[1485]: time="2025-05-08T00:06:18.796227623Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:18.800450 containerd[1485]: time="2025-05-08T00:06:18.800390170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:18.801591 containerd[1485]: time="2025-05-08T00:06:18.801358023Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.945493337s" May 8 00:06:18.801591 containerd[1485]: time="2025-05-08T00:06:18.801412384Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 00:06:18.842465 containerd[1485]: time="2025-05-08T00:06:18.842393773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:06:18.845187 systemd-resolved[1333]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 8 00:06:19.435769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320837871.mount: Deactivated successfully. May 8 00:06:20.480337 containerd[1485]: time="2025-05-08T00:06:20.479975575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:20.482332 containerd[1485]: time="2025-05-08T00:06:20.482017210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:06:20.485316 containerd[1485]: time="2025-05-08T00:06:20.483308008Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:20.497604 containerd[1485]: time="2025-05-08T00:06:20.497534212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:20.500381 containerd[1485]: 
time="2025-05-08T00:06:20.500156098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.657697027s" May 8 00:06:20.500381 containerd[1485]: time="2025-05-08T00:06:20.500222069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:06:20.551135 containerd[1485]: time="2025-05-08T00:06:20.550798686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:06:21.109411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514762558.mount: Deactivated successfully. May 8 00:06:21.119348 containerd[1485]: time="2025-05-08T00:06:21.118510868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:21.120927 containerd[1485]: time="2025-05-08T00:06:21.120846476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 8 00:06:21.122557 containerd[1485]: time="2025-05-08T00:06:21.122503687Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:21.129328 containerd[1485]: time="2025-05-08T00:06:21.127785874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:21.129868 containerd[1485]: time="2025-05-08T00:06:21.129818975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id 
\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 578.96116ms" May 8 00:06:21.130057 containerd[1485]: time="2025-05-08T00:06:21.130032783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:06:21.167126 containerd[1485]: time="2025-05-08T00:06:21.167086722Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:06:21.760298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994777415.mount: Deactivated successfully. May 8 00:06:21.938521 systemd-resolved[1333]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 8 00:06:23.671211 containerd[1485]: time="2025-05-08T00:06:23.669244705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:23.671211 containerd[1485]: time="2025-05-08T00:06:23.671123477Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 8 00:06:23.672056 containerd[1485]: time="2025-05-08T00:06:23.672016598Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:23.676765 containerd[1485]: time="2025-05-08T00:06:23.676653574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:06:23.679054 containerd[1485]: time="2025-05-08T00:06:23.678998904Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id 
\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.511631442s" May 8 00:06:23.679254 containerd[1485]: time="2025-05-08T00:06:23.679232368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:06:27.336261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:06:27.337197 systemd[1]: kubelet.service: Consumed 262ms CPU time, 98M memory peak. May 8 00:06:27.355835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:06:27.395301 systemd[1]: Reload requested from client PID 2166 ('systemctl') (unit session-7.scope)... May 8 00:06:27.395347 systemd[1]: Reloading... May 8 00:06:27.589369 zram_generator::config[2213]: No configuration found. May 8 00:06:27.763444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:06:27.954357 systemd[1]: Reloading finished in 558 ms. May 8 00:06:28.035091 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:06:28.041124 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:06:28.041497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:06:28.041585 systemd[1]: kubelet.service: Consumed 130ms CPU time, 83.6M memory peak. May 8 00:06:28.045866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:06:28.211743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:06:28.226176 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:06:28.296719 kubelet[2266]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:06:28.297190 kubelet[2266]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:06:28.297275 kubelet[2266]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:06:28.301317 kubelet[2266]: I0508 00:06:28.300396 2266 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:06:28.712545 kubelet[2266]: I0508 00:06:28.712494 2266 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:06:28.712908 kubelet[2266]: I0508 00:06:28.712872 2266 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:06:28.713352 kubelet[2266]: I0508 00:06:28.713332 2266 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:06:28.736870 kubelet[2266]: I0508 00:06:28.736824 2266 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:06:28.738666 kubelet[2266]: E0508 00:06:28.738395 2266 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://146.190.122.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.757280 kubelet[2266]: I0508 00:06:28.757187 2266 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:06:28.757565 kubelet[2266]: I0508 00:06:28.757515 2266 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:06:28.757779 kubelet[2266]: I0508 00:06:28.757561 2266 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-e3439e552d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory
":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:06:28.758632 kubelet[2266]: I0508 00:06:28.758581 2266 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:06:28.758632 kubelet[2266]: I0508 00:06:28.758625 2266 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:06:28.758821 kubelet[2266]: I0508 00:06:28.758808 2266 state_mem.go:36] "Initialized new in-memory state store" May 8 00:06:28.759755 kubelet[2266]: I0508 00:06:28.759717 2266 kubelet.go:400] "Attempting to sync node with API server" May 8 00:06:28.759755 kubelet[2266]: I0508 00:06:28.759742 2266 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:06:28.759917 kubelet[2266]: I0508 00:06:28.759768 2266 kubelet.go:312] "Adding apiserver pod source" May 8 00:06:28.759917 kubelet[2266]: I0508 00:06:28.759787 2266 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:06:28.764327 kubelet[2266]: W0508 00:06:28.763964 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.122.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.764327 kubelet[2266]: E0508 00:06:28.764048 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.122.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.764327 kubelet[2266]: W0508 00:06:28.764151 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.122.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-e3439e552d&limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 
00:06:28.764327 kubelet[2266]: E0508 00:06:28.764204 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.122.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-e3439e552d&limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.765626 kubelet[2266]: I0508 00:06:28.765578 2266 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:06:28.768797 kubelet[2266]: I0508 00:06:28.767897 2266 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:06:28.768797 kubelet[2266]: W0508 00:06:28.768022 2266 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:06:28.769031 kubelet[2266]: I0508 00:06:28.769005 2266 server.go:1264] "Started kubelet" May 8 00:06:28.774224 kubelet[2266]: I0508 00:06:28.774162 2266 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:06:28.776056 kubelet[2266]: I0508 00:06:28.776019 2266 server.go:455] "Adding debug handlers to kubelet server" May 8 00:06:28.776419 kubelet[2266]: I0508 00:06:28.776345 2266 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:06:28.777490 kubelet[2266]: I0508 00:06:28.777458 2266 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:06:28.782109 kubelet[2266]: E0508 00:06:28.781891 2266 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.122.31:6443/api/v1/namespaces/default/events\": dial tcp 146.190.122.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-e3439e552d.183d648d5af79ad0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-e3439e552d,UID:ci-4230.1.1-n-e3439e552d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-e3439e552d,},FirstTimestamp:2025-05-08 00:06:28.76896328 +0000 UTC m=+0.536469952,LastTimestamp:2025-05-08 00:06:28.76896328 +0000 UTC m=+0.536469952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-e3439e552d,}" May 8 00:06:28.782908 kubelet[2266]: I0508 00:06:28.782721 2266 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:06:28.794544 kubelet[2266]: I0508 00:06:28.794496 2266 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:06:28.796327 kubelet[2266]: E0508 00:06:28.795894 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.122.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-e3439e552d?timeout=10s\": dial tcp 146.190.122.31:6443: connect: connection refused" interval="200ms" May 8 00:06:28.796327 kubelet[2266]: I0508 00:06:28.796180 2266 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:06:28.796327 kubelet[2266]: I0508 00:06:28.796244 2266 reconciler.go:26] "Reconciler: start to sync state" May 8 00:06:28.797312 kubelet[2266]: W0508 00:06:28.796661 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.122.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.797312 kubelet[2266]: E0508 00:06:28.796732 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://146.190.122.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.797312 kubelet[2266]: I0508 00:06:28.797154 2266 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:06:28.802172 kubelet[2266]: I0508 00:06:28.802136 2266 factory.go:221] Registration of the containerd container factory successfully May 8 00:06:28.802172 kubelet[2266]: I0508 00:06:28.802156 2266 factory.go:221] Registration of the systemd container factory successfully May 8 00:06:28.821829 kubelet[2266]: I0508 00:06:28.821746 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:06:28.824700 kubelet[2266]: I0508 00:06:28.823551 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:06:28.824700 kubelet[2266]: I0508 00:06:28.823595 2266 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:06:28.824700 kubelet[2266]: I0508 00:06:28.823624 2266 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:06:28.824700 kubelet[2266]: E0508 00:06:28.823673 2266 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:06:28.833874 kubelet[2266]: E0508 00:06:28.833524 2266 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:06:28.833874 kubelet[2266]: W0508 00:06:28.833691 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.122.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.833874 kubelet[2266]: E0508 00:06:28.833755 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.122.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:28.841194 kubelet[2266]: I0508 00:06:28.840848 2266 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:06:28.841194 kubelet[2266]: I0508 00:06:28.840870 2266 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:06:28.841194 kubelet[2266]: I0508 00:06:28.840895 2266 state_mem.go:36] "Initialized new in-memory state store" May 8 00:06:28.843560 kubelet[2266]: I0508 00:06:28.843532 2266 policy_none.go:49] "None policy: Start" May 8 00:06:28.844592 kubelet[2266]: I0508 00:06:28.844547 2266 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:06:28.844592 kubelet[2266]: I0508 00:06:28.844580 2266 state_mem.go:35] "Initializing new in-memory state store" May 8 00:06:28.853170 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:06:28.870201 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:06:28.876981 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 00:06:28.890553 kubelet[2266]: I0508 00:06:28.889854 2266 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:06:28.890553 kubelet[2266]: I0508 00:06:28.890103 2266 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:06:28.890553 kubelet[2266]: I0508 00:06:28.890256 2266 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:06:28.895169 kubelet[2266]: E0508 00:06:28.894684 2266 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-e3439e552d\" not found" May 8 00:06:28.897619 kubelet[2266]: I0508 00:06:28.897558 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:28.898092 kubelet[2266]: E0508 00:06:28.898046 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.122.31:6443/api/v1/nodes\": dial tcp 146.190.122.31:6443: connect: connection refused" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:28.925499 kubelet[2266]: I0508 00:06:28.924799 2266 topology_manager.go:215] "Topology Admit Handler" podUID="cd1b56cefa0e7dfb2a791661984898fa" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:28.926385 kubelet[2266]: I0508 00:06:28.926345 2266 topology_manager.go:215] "Topology Admit Handler" podUID="739f15413157f5514ee441602267e44a" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:28.927959 kubelet[2266]: I0508 00:06:28.927090 2266 topology_manager.go:215] "Topology Admit Handler" podUID="45bbb4d6189e505886784ac3b4cbd684" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-e3439e552d" May 8 00:06:28.939170 systemd[1]: Created slice kubepods-burstable-podcd1b56cefa0e7dfb2a791661984898fa.slice - libcontainer container 
kubepods-burstable-podcd1b56cefa0e7dfb2a791661984898fa.slice. May 8 00:06:28.958143 systemd[1]: Created slice kubepods-burstable-pod739f15413157f5514ee441602267e44a.slice - libcontainer container kubepods-burstable-pod739f15413157f5514ee441602267e44a.slice. May 8 00:06:28.975867 systemd[1]: Created slice kubepods-burstable-pod45bbb4d6189e505886784ac3b4cbd684.slice - libcontainer container kubepods-burstable-pod45bbb4d6189e505886784ac3b4cbd684.slice. May 8 00:06:28.996854 kubelet[2266]: I0508 00:06:28.996751 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd1b56cefa0e7dfb2a791661984898fa-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" (UID: \"cd1b56cefa0e7dfb2a791661984898fa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:28.997094 kubelet[2266]: E0508 00:06:28.996927 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.122.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-e3439e552d?timeout=10s\": dial tcp 146.190.122.31:6443: connect: connection refused" interval="400ms" May 8 00:06:29.097614 kubelet[2266]: I0508 00:06:29.097548 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd1b56cefa0e7dfb2a791661984898fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" (UID: \"cd1b56cefa0e7dfb2a791661984898fa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.097614 kubelet[2266]: I0508 00:06:29.097610 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: 
\"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.097887 kubelet[2266]: I0508 00:06:29.097636 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.097887 kubelet[2266]: I0508 00:06:29.097673 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd1b56cefa0e7dfb2a791661984898fa-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" (UID: \"cd1b56cefa0e7dfb2a791661984898fa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.097887 kubelet[2266]: I0508 00:06:29.097699 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.097887 kubelet[2266]: I0508 00:06:29.097723 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.097887 kubelet[2266]: I0508 00:06:29.097752 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.098180 kubelet[2266]: I0508 00:06:29.097775 2266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45bbb4d6189e505886784ac3b4cbd684-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-e3439e552d\" (UID: \"45bbb4d6189e505886784ac3b4cbd684\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3439e552d" May 8 00:06:29.099611 kubelet[2266]: I0508 00:06:29.099562 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:29.100018 kubelet[2266]: E0508 00:06:29.099983 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.122.31:6443/api/v1/nodes\": dial tcp 146.190.122.31:6443: connect: connection refused" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:29.253211 kubelet[2266]: E0508 00:06:29.253002 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:29.254045 containerd[1485]: time="2025-05-08T00:06:29.253998636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-e3439e552d,Uid:cd1b56cefa0e7dfb2a791661984898fa,Namespace:kube-system,Attempt:0,}" May 8 00:06:29.261201 systemd-resolved[1333]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
May 8 00:06:29.262874 kubelet[2266]: E0508 00:06:29.262823 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:29.263554 containerd[1485]: time="2025-05-08T00:06:29.263379904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-e3439e552d,Uid:739f15413157f5514ee441602267e44a,Namespace:kube-system,Attempt:0,}" May 8 00:06:29.280507 kubelet[2266]: E0508 00:06:29.280336 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:29.281700 containerd[1485]: time="2025-05-08T00:06:29.281098308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-e3439e552d,Uid:45bbb4d6189e505886784ac3b4cbd684,Namespace:kube-system,Attempt:0,}" May 8 00:06:29.365405 kubelet[2266]: E0508 00:06:29.365245 2266 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.122.31:6443/api/v1/namespaces/default/events\": dial tcp 146.190.122.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-e3439e552d.183d648d5af79ad0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-e3439e552d,UID:ci-4230.1.1-n-e3439e552d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-e3439e552d,},FirstTimestamp:2025-05-08 00:06:28.76896328 +0000 UTC m=+0.536469952,LastTimestamp:2025-05-08 00:06:28.76896328 +0000 UTC m=+0.536469952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-e3439e552d,}" May 8 00:06:29.397986 kubelet[2266]: 
E0508 00:06:29.397926 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.122.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-e3439e552d?timeout=10s\": dial tcp 146.190.122.31:6443: connect: connection refused" interval="800ms" May 8 00:06:29.502128 kubelet[2266]: I0508 00:06:29.502081 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:29.502517 kubelet[2266]: E0508 00:06:29.502486 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.122.31:6443/api/v1/nodes\": dial tcp 146.190.122.31:6443: connect: connection refused" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:29.687126 kubelet[2266]: W0508 00:06:29.687036 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.122.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:29.687126 kubelet[2266]: E0508 00:06:29.687126 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.122.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:29.760281 kubelet[2266]: W0508 00:06:29.760176 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.122.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:29.760281 kubelet[2266]: E0508 00:06:29.760243 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.122.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: 
connection refused May 8 00:06:29.826435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593799413.mount: Deactivated successfully. May 8 00:06:29.841390 containerd[1485]: time="2025-05-08T00:06:29.841325414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:06:29.845778 containerd[1485]: time="2025-05-08T00:06:29.845695713Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:06:29.848711 containerd[1485]: time="2025-05-08T00:06:29.848572058Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:06:29.850214 containerd[1485]: time="2025-05-08T00:06:29.850158515Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:06:29.852372 containerd[1485]: time="2025-05-08T00:06:29.852282288Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:06:29.855321 containerd[1485]: time="2025-05-08T00:06:29.854078795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:06:29.855321 containerd[1485]: time="2025-05-08T00:06:29.854490269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:06:29.859580 containerd[1485]: time="2025-05-08T00:06:29.859510834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:06:29.861329 containerd[1485]: time="2025-05-08T00:06:29.861244260Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.736884ms" May 8 00:06:29.866104 containerd[1485]: time="2025-05-08T00:06:29.866015673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.789854ms" May 8 00:06:29.867448 containerd[1485]: time="2025-05-08T00:06:29.867351984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.567524ms" May 8 00:06:30.041572 kubelet[2266]: W0508 00:06:30.041154 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.122.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:30.041572 kubelet[2266]: E0508 00:06:30.041205 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.122.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection 
refused May 8 00:06:30.068336 containerd[1485]: time="2025-05-08T00:06:30.057264984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:06:30.068336 containerd[1485]: time="2025-05-08T00:06:30.057368229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:06:30.068336 containerd[1485]: time="2025-05-08T00:06:30.057395244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:30.068336 containerd[1485]: time="2025-05-08T00:06:30.057505174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:30.073789 containerd[1485]: time="2025-05-08T00:06:30.072004395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:06:30.073789 containerd[1485]: time="2025-05-08T00:06:30.072974981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:06:30.073789 containerd[1485]: time="2025-05-08T00:06:30.073018146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:30.082462 containerd[1485]: time="2025-05-08T00:06:30.081441112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:06:30.082462 containerd[1485]: time="2025-05-08T00:06:30.081498270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:06:30.082462 containerd[1485]: time="2025-05-08T00:06:30.081514874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:30.082462 containerd[1485]: time="2025-05-08T00:06:30.081662017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:30.082462 containerd[1485]: time="2025-05-08T00:06:30.078182819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:30.106610 systemd[1]: Started cri-containerd-36c0be030418c20365de4670174f45e72c77c4e372958410bfedee8cf8fe4d18.scope - libcontainer container 36c0be030418c20365de4670174f45e72c77c4e372958410bfedee8cf8fe4d18. May 8 00:06:30.131619 systemd[1]: Started cri-containerd-97013285fb43f5910e9805030fc17d2c160bd2867a04b4c4f1c12f88271280b1.scope - libcontainer container 97013285fb43f5910e9805030fc17d2c160bd2867a04b4c4f1c12f88271280b1. May 8 00:06:30.134360 systemd[1]: Started cri-containerd-f32277a332178e80ecf17c41b8e2dee579db5b202d8daa34d6a7d96f9c84b363.scope - libcontainer container f32277a332178e80ecf17c41b8e2dee579db5b202d8daa34d6a7d96f9c84b363. 
May 8 00:06:30.198911 kubelet[2266]: E0508 00:06:30.198803 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.122.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-e3439e552d?timeout=10s\": dial tcp 146.190.122.31:6443: connect: connection refused" interval="1.6s" May 8 00:06:30.218456 containerd[1485]: time="2025-05-08T00:06:30.218407692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-e3439e552d,Uid:cd1b56cefa0e7dfb2a791661984898fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"f32277a332178e80ecf17c41b8e2dee579db5b202d8daa34d6a7d96f9c84b363\"" May 8 00:06:30.220660 kubelet[2266]: E0508 00:06:30.220598 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:30.229556 containerd[1485]: time="2025-05-08T00:06:30.229095046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-e3439e552d,Uid:45bbb4d6189e505886784ac3b4cbd684,Namespace:kube-system,Attempt:0,} returns sandbox id \"36c0be030418c20365de4670174f45e72c77c4e372958410bfedee8cf8fe4d18\"" May 8 00:06:30.230646 containerd[1485]: time="2025-05-08T00:06:30.229533007Z" level=info msg="CreateContainer within sandbox \"f32277a332178e80ecf17c41b8e2dee579db5b202d8daa34d6a7d96f9c84b363\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:06:30.231251 kubelet[2266]: E0508 00:06:30.231211 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:30.234701 containerd[1485]: time="2025-05-08T00:06:30.234641004Z" level=info msg="CreateContainer within sandbox \"36c0be030418c20365de4670174f45e72c77c4e372958410bfedee8cf8fe4d18\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:06:30.244662 containerd[1485]: time="2025-05-08T00:06:30.244503432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-e3439e552d,Uid:739f15413157f5514ee441602267e44a,Namespace:kube-system,Attempt:0,} returns sandbox id \"97013285fb43f5910e9805030fc17d2c160bd2867a04b4c4f1c12f88271280b1\"" May 8 00:06:30.245878 kubelet[2266]: E0508 00:06:30.245837 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:30.249464 containerd[1485]: time="2025-05-08T00:06:30.249193664Z" level=info msg="CreateContainer within sandbox \"97013285fb43f5910e9805030fc17d2c160bd2867a04b4c4f1c12f88271280b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:06:30.265162 containerd[1485]: time="2025-05-08T00:06:30.264375111Z" level=info msg="CreateContainer within sandbox \"36c0be030418c20365de4670174f45e72c77c4e372958410bfedee8cf8fe4d18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbe3fec600285a5be48ccd22208264e27e45524feb2fff8da2db4f0ec76ea6d6\"" May 8 00:06:30.266037 containerd[1485]: time="2025-05-08T00:06:30.265998731Z" level=info msg="StartContainer for \"bbe3fec600285a5be48ccd22208264e27e45524feb2fff8da2db4f0ec76ea6d6\"" May 8 00:06:30.268411 kubelet[2266]: W0508 00:06:30.268263 2266 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.122.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-e3439e552d&limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:30.268411 kubelet[2266]: E0508 00:06:30.268355 2266 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://146.190.122.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-e3439e552d&limit=500&resourceVersion=0": dial tcp 146.190.122.31:6443: connect: connection refused May 8 00:06:30.269840 containerd[1485]: time="2025-05-08T00:06:30.269798716Z" level=info msg="CreateContainer within sandbox \"f32277a332178e80ecf17c41b8e2dee579db5b202d8daa34d6a7d96f9c84b363\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"006819c0c78b06cb55441299066fe429c265155c3c713db118f459a3e9519887\"" May 8 00:06:30.270606 containerd[1485]: time="2025-05-08T00:06:30.270571467Z" level=info msg="StartContainer for \"006819c0c78b06cb55441299066fe429c265155c3c713db118f459a3e9519887\"" May 8 00:06:30.305419 kubelet[2266]: I0508 00:06:30.304493 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:30.305419 kubelet[2266]: E0508 00:06:30.304911 2266 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.122.31:6443/api/v1/nodes\": dial tcp 146.190.122.31:6443: connect: connection refused" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:30.314879 systemd[1]: Started cri-containerd-bbe3fec600285a5be48ccd22208264e27e45524feb2fff8da2db4f0ec76ea6d6.scope - libcontainer container bbe3fec600285a5be48ccd22208264e27e45524feb2fff8da2db4f0ec76ea6d6. May 8 00:06:30.328708 systemd[1]: Started cri-containerd-006819c0c78b06cb55441299066fe429c265155c3c713db118f459a3e9519887.scope - libcontainer container 006819c0c78b06cb55441299066fe429c265155c3c713db118f459a3e9519887. 
May 8 00:06:30.336088 containerd[1485]: time="2025-05-08T00:06:30.333162104Z" level=info msg="CreateContainer within sandbox \"97013285fb43f5910e9805030fc17d2c160bd2867a04b4c4f1c12f88271280b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e4ad02799b24845ed0379063099907f045e0ebc3165e0fbdc486cf9a9e3cd0b\"" May 8 00:06:30.338396 containerd[1485]: time="2025-05-08T00:06:30.338349031Z" level=info msg="StartContainer for \"0e4ad02799b24845ed0379063099907f045e0ebc3165e0fbdc486cf9a9e3cd0b\"" May 8 00:06:30.393171 systemd[1]: Started cri-containerd-0e4ad02799b24845ed0379063099907f045e0ebc3165e0fbdc486cf9a9e3cd0b.scope - libcontainer container 0e4ad02799b24845ed0379063099907f045e0ebc3165e0fbdc486cf9a9e3cd0b. May 8 00:06:30.419205 containerd[1485]: time="2025-05-08T00:06:30.419036801Z" level=info msg="StartContainer for \"006819c0c78b06cb55441299066fe429c265155c3c713db118f459a3e9519887\" returns successfully" May 8 00:06:30.430766 containerd[1485]: time="2025-05-08T00:06:30.430727123Z" level=info msg="StartContainer for \"bbe3fec600285a5be48ccd22208264e27e45524feb2fff8da2db4f0ec76ea6d6\" returns successfully" May 8 00:06:30.495902 containerd[1485]: time="2025-05-08T00:06:30.495854252Z" level=info msg="StartContainer for \"0e4ad02799b24845ed0379063099907f045e0ebc3165e0fbdc486cf9a9e3cd0b\" returns successfully" May 8 00:06:30.849721 kubelet[2266]: E0508 00:06:30.849673 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:30.854387 kubelet[2266]: E0508 00:06:30.854349 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:30.856184 kubelet[2266]: E0508 00:06:30.856138 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:31.858107 kubelet[2266]: E0508 00:06:31.858072 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:31.906674 kubelet[2266]: I0508 00:06:31.906623 2266 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:33.030137 kubelet[2266]: E0508 00:06:33.030083 2266 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-e3439e552d\" not found" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:33.058879 kubelet[2266]: I0508 00:06:33.058659 2266 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:33.203635 kubelet[2266]: E0508 00:06:33.203590 2266 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-n-e3439e552d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3439e552d" May 8 00:06:33.204061 kubelet[2266]: E0508 00:06:33.204006 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:33.217558 kubelet[2266]: E0508 00:06:33.217515 2266 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:33.217951 kubelet[2266]: E0508 00:06:33.217931 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" 
May 8 00:06:33.766096 kubelet[2266]: I0508 00:06:33.765744 2266 apiserver.go:52] "Watching apiserver" May 8 00:06:33.797310 kubelet[2266]: I0508 00:06:33.797203 2266 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:06:35.171971 systemd[1]: Reload requested from client PID 2543 ('systemctl') (unit session-7.scope)... May 8 00:06:35.171990 systemd[1]: Reloading... May 8 00:06:35.323341 zram_generator::config[2590]: No configuration found. May 8 00:06:35.485850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:06:35.667431 systemd[1]: Reloading finished in 494 ms. May 8 00:06:35.710241 kubelet[2266]: I0508 00:06:35.709481 2266 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:06:35.709925 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:06:35.726203 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:06:35.726522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:06:35.726605 systemd[1]: kubelet.service: Consumed 1.013s CPU time, 111.3M memory peak. May 8 00:06:35.744797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:06:35.910030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:06:35.924949 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:06:36.030103 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:06:36.030103 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:06:36.030103 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:06:36.031361 kubelet[2638]: I0508 00:06:36.030646 2638 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:06:36.038137 kubelet[2638]: I0508 00:06:36.038094 2638 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:06:36.038377 kubelet[2638]: I0508 00:06:36.038362 2638 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:06:36.038763 kubelet[2638]: I0508 00:06:36.038738 2638 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:06:36.041386 kubelet[2638]: I0508 00:06:36.041029 2638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:06:36.044677 kubelet[2638]: I0508 00:06:36.044634 2638 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:06:36.054927 kubelet[2638]: I0508 00:06:36.054890 2638 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:06:36.055438 kubelet[2638]: I0508 00:06:36.055391 2638 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:06:36.057375 kubelet[2638]: I0508 00:06:36.055603 2638 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-e3439e552d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:06:36.057375 kubelet[2638]: I0508 00:06:36.055925 2638 topology_manager.go:138] "Creating topology manager with none policy" May 8 
00:06:36.057375 kubelet[2638]: I0508 00:06:36.055940 2638 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:06:36.057375 kubelet[2638]: I0508 00:06:36.055997 2638 state_mem.go:36] "Initialized new in-memory state store" May 8 00:06:36.057375 kubelet[2638]: I0508 00:06:36.056119 2638 kubelet.go:400] "Attempting to sync node with API server" May 8 00:06:36.057797 kubelet[2638]: I0508 00:06:36.056134 2638 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:06:36.057797 kubelet[2638]: I0508 00:06:36.056160 2638 kubelet.go:312] "Adding apiserver pod source" May 8 00:06:36.057797 kubelet[2638]: I0508 00:06:36.056177 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:06:36.060252 kubelet[2638]: I0508 00:06:36.060221 2638 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:06:36.062277 kubelet[2638]: I0508 00:06:36.062245 2638 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:06:36.072440 kubelet[2638]: I0508 00:06:36.072411 2638 server.go:1264] "Started kubelet" May 8 00:06:36.080350 kubelet[2638]: I0508 00:06:36.079081 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:06:36.093251 kubelet[2638]: I0508 00:06:36.093203 2638 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:06:36.095122 kubelet[2638]: I0508 00:06:36.095088 2638 server.go:455] "Adding debug handlers to kubelet server" May 8 00:06:36.097654 kubelet[2638]: I0508 00:06:36.097571 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:06:36.098121 kubelet[2638]: I0508 00:06:36.098095 2638 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:06:36.099615 kubelet[2638]: I0508 00:06:36.099549 2638 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:06:36.099761 kubelet[2638]: I0508 00:06:36.099671 2638 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:06:36.099861 kubelet[2638]: I0508 00:06:36.099844 2638 reconciler.go:26] "Reconciler: start to sync state" May 8 00:06:36.102922 kubelet[2638]: I0508 00:06:36.102876 2638 factory.go:221] Registration of the systemd container factory successfully May 8 00:06:36.103098 kubelet[2638]: I0508 00:06:36.103055 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:06:36.103853 kubelet[2638]: E0508 00:06:36.103676 2638 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:06:36.110336 kubelet[2638]: I0508 00:06:36.107582 2638 factory.go:221] Registration of the containerd container factory successfully May 8 00:06:36.114420 kubelet[2638]: I0508 00:06:36.114178 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:06:36.117165 kubelet[2638]: I0508 00:06:36.117120 2638 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:06:36.117165 kubelet[2638]: I0508 00:06:36.117172 2638 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:06:36.117429 kubelet[2638]: I0508 00:06:36.117196 2638 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:06:36.117429 kubelet[2638]: E0508 00:06:36.117254 2638 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:06:36.200826 sudo[2667]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:06:36.201343 sudo[2667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:06:36.202863 kubelet[2638]: I0508 00:06:36.202750 2638 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:36.226058 kubelet[2638]: E0508 00:06:36.225003 2638 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:06:36.226058 kubelet[2638]: I0508 00:06:36.225454 2638 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:36.226058 kubelet[2638]: I0508 00:06:36.225548 2638 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-e3439e552d" May 8 00:06:36.246540 kubelet[2638]: I0508 00:06:36.246155 2638 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:06:36.246540 kubelet[2638]: I0508 00:06:36.246180 2638 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:06:36.246540 kubelet[2638]: I0508 00:06:36.246209 2638 state_mem.go:36] "Initialized new in-memory state store" May 8 00:06:36.246540 kubelet[2638]: I0508 00:06:36.246413 2638 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:06:36.246540 kubelet[2638]: I0508 00:06:36.246425 2638 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:06:36.246540 
kubelet[2638]: I0508 00:06:36.246451 2638 policy_none.go:49] "None policy: Start" May 8 00:06:36.248649 kubelet[2638]: I0508 00:06:36.247415 2638 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:06:36.248649 kubelet[2638]: I0508 00:06:36.247445 2638 state_mem.go:35] "Initializing new in-memory state store" May 8 00:06:36.250340 kubelet[2638]: I0508 00:06:36.250222 2638 state_mem.go:75] "Updated machine memory state" May 8 00:06:36.261073 kubelet[2638]: I0508 00:06:36.259432 2638 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:06:36.261073 kubelet[2638]: I0508 00:06:36.259632 2638 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:06:36.261073 kubelet[2638]: I0508 00:06:36.260091 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:06:36.425495 kubelet[2638]: I0508 00:06:36.425417 2638 topology_manager.go:215] "Topology Admit Handler" podUID="cd1b56cefa0e7dfb2a791661984898fa" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.425910 kubelet[2638]: I0508 00:06:36.425880 2638 topology_manager.go:215] "Topology Admit Handler" podUID="739f15413157f5514ee441602267e44a" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.426868 kubelet[2638]: I0508 00:06:36.425996 2638 topology_manager.go:215] "Topology Admit Handler" podUID="45bbb4d6189e505886784ac3b4cbd684" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.445623 kubelet[2638]: W0508 00:06:36.445316 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:06:36.447150 kubelet[2638]: W0508 00:06:36.447112 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result 
in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:06:36.448219 kubelet[2638]: W0508 00:06:36.448190 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:06:36.504628 kubelet[2638]: I0508 00:06:36.504574 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd1b56cefa0e7dfb2a791661984898fa-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" (UID: \"cd1b56cefa0e7dfb2a791661984898fa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.504789 kubelet[2638]: I0508 00:06:36.504646 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.504789 kubelet[2638]: I0508 00:06:36.504687 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.504789 kubelet[2638]: I0508 00:06:36.504720 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" 
May 8 00:06:36.505097 kubelet[2638]: I0508 00:06:36.505070 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.505174 kubelet[2638]: I0508 00:06:36.505151 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45bbb4d6189e505886784ac3b4cbd684-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-e3439e552d\" (UID: \"45bbb4d6189e505886784ac3b4cbd684\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.505268 kubelet[2638]: I0508 00:06:36.505185 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd1b56cefa0e7dfb2a791661984898fa-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" (UID: \"cd1b56cefa0e7dfb2a791661984898fa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.505268 kubelet[2638]: I0508 00:06:36.505210 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd1b56cefa0e7dfb2a791661984898fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" (UID: \"cd1b56cefa0e7dfb2a791661984898fa\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.505268 kubelet[2638]: I0508 00:06:36.505240 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/739f15413157f5514ee441602267e44a-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4230.1.1-n-e3439e552d\" (UID: \"739f15413157f5514ee441602267e44a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" May 8 00:06:36.749567 kubelet[2638]: E0508 00:06:36.747224 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:36.749567 kubelet[2638]: E0508 00:06:36.747653 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:36.751134 kubelet[2638]: E0508 00:06:36.751072 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:36.939548 sudo[2667]: pam_unix(sudo:session): session closed for user root May 8 00:06:37.057504 kubelet[2638]: I0508 00:06:37.057355 2638 apiserver.go:52] "Watching apiserver" May 8 00:06:37.099909 kubelet[2638]: I0508 00:06:37.099856 2638 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:06:37.174261 kubelet[2638]: E0508 00:06:37.172661 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:37.174808 kubelet[2638]: E0508 00:06:37.174773 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:37.190569 kubelet[2638]: W0508 00:06:37.190533 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:06:37.190948 
kubelet[2638]: E0508 00:06:37.190839 2638 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-e3439e552d\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" May 8 00:06:37.193373 kubelet[2638]: E0508 00:06:37.193339 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:37.229808 kubelet[2638]: I0508 00:06:37.229550 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-e3439e552d" podStartSLOduration=1.2295069650000001 podStartE2EDuration="1.229506965s" podCreationTimestamp="2025-05-08 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:06:37.226135451 +0000 UTC m=+1.289897300" watchObservedRunningTime="2025-05-08 00:06:37.229506965 +0000 UTC m=+1.293268809" May 8 00:06:37.243231 kubelet[2638]: I0508 00:06:37.243163 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-e3439e552d" podStartSLOduration=1.243137912 podStartE2EDuration="1.243137912s" podCreationTimestamp="2025-05-08 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:06:37.241781965 +0000 UTC m=+1.305543816" watchObservedRunningTime="2025-05-08 00:06:37.243137912 +0000 UTC m=+1.306899765" May 8 00:06:37.269826 kubelet[2638]: I0508 00:06:37.269602 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-e3439e552d" podStartSLOduration=1.26957557 podStartE2EDuration="1.26957557s" podCreationTimestamp="2025-05-08 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:06:37.255813198 +0000 UTC m=+1.319575048" watchObservedRunningTime="2025-05-08 00:06:37.26957557 +0000 UTC m=+1.333337423" May 8 00:06:38.174525 kubelet[2638]: E0508 00:06:38.174394 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:38.589799 sudo[1674]: pam_unix(sudo:session): session closed for user root May 8 00:06:38.595297 sshd[1673]: Connection closed by 139.178.68.195 port 53608 May 8 00:06:38.594914 sshd-session[1670]: pam_unix(sshd:session): session closed for user core May 8 00:06:38.599759 systemd[1]: sshd@6-146.190.122.31:22-139.178.68.195:53608.service: Deactivated successfully. May 8 00:06:38.604032 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:06:38.604523 systemd[1]: session-7.scope: Consumed 6.453s CPU time, 236.5M memory peak. May 8 00:06:38.608976 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. May 8 00:06:38.610663 systemd-logind[1463]: Removed session 7. May 8 00:06:40.385874 systemd-resolved[1333]: Clock change detected. Flushing caches. May 8 00:06:40.385943 systemd-timesyncd[1365]: Contacted time server 142.202.190.19:123 (2.flatcar.pool.ntp.org). May 8 00:06:40.386038 systemd-timesyncd[1365]: Initial clock synchronization to Thu 2025-05-08 00:06:40.385486 UTC. 
May 8 00:06:41.254415 kubelet[2638]: E0508 00:06:41.254367 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:41.644277 kubelet[2638]: E0508 00:06:41.644229 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:43.304446 kubelet[2638]: E0508 00:06:43.304302 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:43.649516 kubelet[2638]: E0508 00:06:43.649309 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:44.251721 kubelet[2638]: E0508 00:06:44.251654 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:44.650109 kubelet[2638]: E0508 00:06:44.650063 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:49.854163 update_engine[1464]: I20250508 00:06:49.854038 1464 update_attempter.cc:509] Updating boot flags... 
May 8 00:06:49.907858 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2713) May 8 00:06:50.052913 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2712) May 8 00:06:50.173456 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2712) May 8 00:06:50.951217 kubelet[2638]: I0508 00:06:50.951159 2638 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:06:50.954461 containerd[1485]: time="2025-05-08T00:06:50.953873267Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:06:50.954988 kubelet[2638]: I0508 00:06:50.954764 2638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:06:51.040672 kubelet[2638]: I0508 00:06:51.040388 2638 topology_manager.go:215] "Topology Admit Handler" podUID="0b4f8b7c-b78a-460f-bd0c-3744876d5fd1" podNamespace="kube-system" podName="kube-proxy-9vt62" May 8 00:06:51.059815 systemd[1]: Created slice kubepods-besteffort-pod0b4f8b7c_b78a_460f_bd0c_3744876d5fd1.slice - libcontainer container kubepods-besteffort-pod0b4f8b7c_b78a_460f_bd0c_3744876d5fd1.slice. May 8 00:06:51.065478 kubelet[2638]: I0508 00:06:51.064471 2638 topology_manager.go:215] "Topology Admit Handler" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" podNamespace="kube-system" podName="cilium-rz4v2" May 8 00:06:51.088932 systemd[1]: Created slice kubepods-burstable-podbcd976d3_aff7_4b77_ad7c_18942b5d0979.slice - libcontainer container kubepods-burstable-podbcd976d3_aff7_4b77_ad7c_18942b5d0979.slice. 
May 8 00:06:51.102947 kubelet[2638]: W0508 00:06:51.102848 2638 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.1.1-n-e3439e552d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-e3439e552d' and this object May 8 00:06:51.102947 kubelet[2638]: E0508 00:06:51.102915 2638 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.1.1-n-e3439e552d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-e3439e552d' and this object May 8 00:06:51.103691 kubelet[2638]: W0508 00:06:51.103662 2638 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.1.1-n-e3439e552d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-e3439e552d' and this object May 8 00:06:51.103904 kubelet[2638]: E0508 00:06:51.103868 2638 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.1.1-n-e3439e552d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-e3439e552d' and this object May 8 00:06:51.129779 kubelet[2638]: W0508 00:06:51.128886 2638 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.1.1-n-e3439e552d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-4230.1.1-n-e3439e552d' and this object May 8 00:06:51.129779 kubelet[2638]: E0508 00:06:51.128953 2638 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.1.1-n-e3439e552d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.1-n-e3439e552d' and this object May 8 00:06:51.174409 kubelet[2638]: I0508 00:06:51.173689 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b4f8b7c-b78a-460f-bd0c-3744876d5fd1-xtables-lock\") pod \"kube-proxy-9vt62\" (UID: \"0b4f8b7c-b78a-460f-bd0c-3744876d5fd1\") " pod="kube-system/kube-proxy-9vt62" May 8 00:06:51.174409 kubelet[2638]: I0508 00:06:51.173757 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-lib-modules\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.174409 kubelet[2638]: I0508 00:06:51.173797 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-xtables-lock\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.174409 kubelet[2638]: I0508 00:06:51.173827 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-config-path\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.174409 
kubelet[2638]: I0508 00:06:51.173855 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hubble-tls\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.174409 kubelet[2638]: I0508 00:06:51.173881 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbst\" (UniqueName: \"kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-kube-api-access-hrbst\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175010 kubelet[2638]: I0508 00:06:51.173907 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hostproc\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175010 kubelet[2638]: I0508 00:06:51.173929 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cni-path\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175010 kubelet[2638]: I0508 00:06:51.173957 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b4f8b7c-b78a-460f-bd0c-3744876d5fd1-kube-proxy\") pod \"kube-proxy-9vt62\" (UID: \"0b4f8b7c-b78a-460f-bd0c-3744876d5fd1\") " pod="kube-system/kube-proxy-9vt62" May 8 00:06:51.175010 kubelet[2638]: I0508 00:06:51.173979 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-bpf-maps\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175010 kubelet[2638]: I0508 00:06:51.174003 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-etc-cni-netd\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175010 kubelet[2638]: I0508 00:06:51.174034 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-kernel\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175404 kubelet[2638]: I0508 00:06:51.174058 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b4f8b7c-b78a-460f-bd0c-3744876d5fd1-lib-modules\") pod \"kube-proxy-9vt62\" (UID: \"0b4f8b7c-b78a-460f-bd0c-3744876d5fd1\") " pod="kube-system/kube-proxy-9vt62" May 8 00:06:51.175404 kubelet[2638]: I0508 00:06:51.174080 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-cgroup\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175404 kubelet[2638]: I0508 00:06:51.174103 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-run\") pod \"cilium-rz4v2\" (UID: 
\"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.175404 kubelet[2638]: I0508 00:06:51.174127 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8kd\" (UniqueName: \"kubernetes.io/projected/0b4f8b7c-b78a-460f-bd0c-3744876d5fd1-kube-api-access-gs8kd\") pod \"kube-proxy-9vt62\" (UID: \"0b4f8b7c-b78a-460f-bd0c-3744876d5fd1\") " pod="kube-system/kube-proxy-9vt62" May 8 00:06:51.175404 kubelet[2638]: I0508 00:06:51.174151 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcd976d3-aff7-4b77-ad7c-18942b5d0979-clustermesh-secrets\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.176980 kubelet[2638]: I0508 00:06:51.174179 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-net\") pod \"cilium-rz4v2\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " pod="kube-system/cilium-rz4v2" May 8 00:06:51.199719 kubelet[2638]: I0508 00:06:51.199618 2638 topology_manager.go:215] "Topology Admit Handler" podUID="8b977ea8-db27-4db3-9fc5-0231afe77acd" podNamespace="kube-system" podName="cilium-operator-599987898-d4pdm" May 8 00:06:51.213393 systemd[1]: Created slice kubepods-besteffort-pod8b977ea8_db27_4db3_9fc5_0231afe77acd.slice - libcontainer container kubepods-besteffort-pod8b977ea8_db27_4db3_9fc5_0231afe77acd.slice. 
May 8 00:06:51.309297 kubelet[2638]: E0508 00:06:51.309254 2638 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:06:51.309297 kubelet[2638]: E0508 00:06:51.309304 2638 projected.go:200] Error preparing data for projected volume kube-api-access-gs8kd for pod kube-system/kube-proxy-9vt62: configmap "kube-root-ca.crt" not found May 8 00:06:51.309612 kubelet[2638]: E0508 00:06:51.309387 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4f8b7c-b78a-460f-bd0c-3744876d5fd1-kube-api-access-gs8kd podName:0b4f8b7c-b78a-460f-bd0c-3744876d5fd1 nodeName:}" failed. No retries permitted until 2025-05-08 00:06:51.809356454 +0000 UTC m=+15.411985759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gs8kd" (UniqueName: "kubernetes.io/projected/0b4f8b7c-b78a-460f-bd0c-3744876d5fd1-kube-api-access-gs8kd") pod "kube-proxy-9vt62" (UID: "0b4f8b7c-b78a-460f-bd0c-3744876d5fd1") : configmap "kube-root-ca.crt" not found May 8 00:06:51.309745 kubelet[2638]: E0508 00:06:51.309251 2638 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 00:06:51.309745 kubelet[2638]: E0508 00:06:51.309713 2638 projected.go:200] Error preparing data for projected volume kube-api-access-hrbst for pod kube-system/cilium-rz4v2: configmap "kube-root-ca.crt" not found May 8 00:06:51.310001 kubelet[2638]: E0508 00:06:51.309758 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-kube-api-access-hrbst podName:bcd976d3-aff7-4b77-ad7c-18942b5d0979 nodeName:}" failed. No retries permitted until 2025-05-08 00:06:51.809743293 +0000 UTC m=+15.412372608 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hrbst" (UniqueName: "kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-kube-api-access-hrbst") pod "cilium-rz4v2" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979") : configmap "kube-root-ca.crt" not found May 8 00:06:51.375690 kubelet[2638]: I0508 00:06:51.375487 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b977ea8-db27-4db3-9fc5-0231afe77acd-cilium-config-path\") pod \"cilium-operator-599987898-d4pdm\" (UID: \"8b977ea8-db27-4db3-9fc5-0231afe77acd\") " pod="kube-system/cilium-operator-599987898-d4pdm" May 8 00:06:51.375690 kubelet[2638]: I0508 00:06:51.375604 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmg25\" (UniqueName: \"kubernetes.io/projected/8b977ea8-db27-4db3-9fc5-0231afe77acd-kube-api-access-mmg25\") pod \"cilium-operator-599987898-d4pdm\" (UID: \"8b977ea8-db27-4db3-9fc5-0231afe77acd\") " pod="kube-system/cilium-operator-599987898-d4pdm" May 8 00:06:51.974804 kubelet[2638]: E0508 00:06:51.974648 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:51.976260 containerd[1485]: time="2025-05-08T00:06:51.976208664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vt62,Uid:0b4f8b7c-b78a-460f-bd0c-3744876d5fd1,Namespace:kube-system,Attempt:0,}" May 8 00:06:52.019378 containerd[1485]: time="2025-05-08T00:06:52.018845551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:06:52.019378 containerd[1485]: time="2025-05-08T00:06:52.019049047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:06:52.019378 containerd[1485]: time="2025-05-08T00:06:52.019077424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:52.019855 containerd[1485]: time="2025-05-08T00:06:52.019706545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:52.074757 systemd[1]: Started cri-containerd-d251811fb3d14d61981484f7e4392a6018f143d8e24e94b4bc2913d4ac51a0ea.scope - libcontainer container d251811fb3d14d61981484f7e4392a6018f143d8e24e94b4bc2913d4ac51a0ea. May 8 00:06:52.116464 containerd[1485]: time="2025-05-08T00:06:52.116397274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vt62,Uid:0b4f8b7c-b78a-460f-bd0c-3744876d5fd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d251811fb3d14d61981484f7e4392a6018f143d8e24e94b4bc2913d4ac51a0ea\"" May 8 00:06:52.117596 kubelet[2638]: E0508 00:06:52.117456 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:52.122283 kubelet[2638]: E0508 00:06:52.122140 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:52.123793 containerd[1485]: time="2025-05-08T00:06:52.123737851Z" level=info msg="CreateContainer within sandbox \"d251811fb3d14d61981484f7e4392a6018f143d8e24e94b4bc2913d4ac51a0ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:06:52.124763 containerd[1485]: time="2025-05-08T00:06:52.124033158Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-d4pdm,Uid:8b977ea8-db27-4db3-9fc5-0231afe77acd,Namespace:kube-system,Attempt:0,}" May 8 00:06:52.170019 containerd[1485]: time="2025-05-08T00:06:52.169930770Z" level=info msg="CreateContainer within sandbox \"d251811fb3d14d61981484f7e4392a6018f143d8e24e94b4bc2913d4ac51a0ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5937a1db087faa585e634e7744ca3abf26869a2ce83caeea2ec99ab5fcaca1d\"" May 8 00:06:52.173584 containerd[1485]: time="2025-05-08T00:06:52.172328449Z" level=info msg="StartContainer for \"f5937a1db087faa585e634e7744ca3abf26869a2ce83caeea2ec99ab5fcaca1d\"" May 8 00:06:52.181558 containerd[1485]: time="2025-05-08T00:06:52.181407384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:06:52.181953 containerd[1485]: time="2025-05-08T00:06:52.181917403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:06:52.182268 containerd[1485]: time="2025-05-08T00:06:52.182221937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:52.183568 containerd[1485]: time="2025-05-08T00:06:52.183514654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:52.221861 systemd[1]: Started cri-containerd-0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2.scope - libcontainer container 0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2. May 8 00:06:52.233797 systemd[1]: Started cri-containerd-f5937a1db087faa585e634e7744ca3abf26869a2ce83caeea2ec99ab5fcaca1d.scope - libcontainer container f5937a1db087faa585e634e7744ca3abf26869a2ce83caeea2ec99ab5fcaca1d. 
May 8 00:06:52.288876 kubelet[2638]: E0508 00:06:52.287401 2638 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 8 00:06:52.288876 kubelet[2638]: E0508 00:06:52.287483 2638 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-rz4v2: failed to sync secret cache: timed out waiting for the condition May 8 00:06:52.288876 kubelet[2638]: E0508 00:06:52.287598 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hubble-tls podName:bcd976d3-aff7-4b77-ad7c-18942b5d0979 nodeName:}" failed. No retries permitted until 2025-05-08 00:06:52.787571655 +0000 UTC m=+16.390200978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hubble-tls") pod "cilium-rz4v2" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979") : failed to sync secret cache: timed out waiting for the condition May 8 00:06:52.314081 containerd[1485]: time="2025-05-08T00:06:52.312819108Z" level=info msg="StartContainer for \"f5937a1db087faa585e634e7744ca3abf26869a2ce83caeea2ec99ab5fcaca1d\" returns successfully" May 8 00:06:52.330084 containerd[1485]: time="2025-05-08T00:06:52.329773941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-d4pdm,Uid:8b977ea8-db27-4db3-9fc5-0231afe77acd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\"" May 8 00:06:52.331719 kubelet[2638]: E0508 00:06:52.331680 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:52.347014 containerd[1485]: time="2025-05-08T00:06:52.346792468Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:06:52.668209 kubelet[2638]: E0508 00:06:52.667732 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:52.898462 kubelet[2638]: E0508 00:06:52.896280 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:06:52.900214 containerd[1485]: time="2025-05-08T00:06:52.899187566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rz4v2,Uid:bcd976d3-aff7-4b77-ad7c-18942b5d0979,Namespace:kube-system,Attempt:0,}" May 8 00:06:52.959080 containerd[1485]: time="2025-05-08T00:06:52.957967233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:06:52.959080 containerd[1485]: time="2025-05-08T00:06:52.958077346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:06:52.959080 containerd[1485]: time="2025-05-08T00:06:52.958100774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:52.959080 containerd[1485]: time="2025-05-08T00:06:52.958309536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:06:53.011387 systemd[1]: Started cri-containerd-fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1.scope - libcontainer container fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1. 
May 8 00:06:53.066828 containerd[1485]: time="2025-05-08T00:06:53.066771193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rz4v2,Uid:bcd976d3-aff7-4b77-ad7c-18942b5d0979,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\""
May 8 00:06:53.073170 kubelet[2638]: E0508 00:06:53.073092 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:06:53.793168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3978531201.mount: Deactivated successfully.
May 8 00:06:54.344958 containerd[1485]: time="2025-05-08T00:06:54.343931476Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:54.346195 containerd[1485]: time="2025-05-08T00:06:54.346128455Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 8 00:06:54.346630 containerd[1485]: time="2025-05-08T00:06:54.346593423Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:06:54.349192 containerd[1485]: time="2025-05-08T00:06:54.349148039Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.002307747s"
May 8 00:06:54.349388 containerd[1485]: time="2025-05-08T00:06:54.349362489Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 8 00:06:54.351874 containerd[1485]: time="2025-05-08T00:06:54.351839880Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 8 00:06:54.359527 containerd[1485]: time="2025-05-08T00:06:54.359291310Z" level=info msg="CreateContainer within sandbox \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 00:06:54.386327 containerd[1485]: time="2025-05-08T00:06:54.386152969Z" level=info msg="CreateContainer within sandbox \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\""
May 8 00:06:54.388945 containerd[1485]: time="2025-05-08T00:06:54.388897177Z" level=info msg="StartContainer for \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\""
May 8 00:06:54.442724 systemd[1]: Started cri-containerd-121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070.scope - libcontainer container 121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070.
May 8 00:06:54.490764 containerd[1485]: time="2025-05-08T00:06:54.490696659Z" level=info msg="StartContainer for \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\" returns successfully"
May 8 00:06:54.686496 kubelet[2638]: E0508 00:06:54.686178 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:06:54.733632 kubelet[2638]: I0508 00:06:54.733037 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9vt62" podStartSLOduration=4.7330145 podStartE2EDuration="4.7330145s" podCreationTimestamp="2025-05-08 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:06:52.685629235 +0000 UTC m=+16.288258559" watchObservedRunningTime="2025-05-08 00:06:54.7330145 +0000 UTC m=+18.335643824"
May 8 00:06:54.733632 kubelet[2638]: I0508 00:06:54.733372 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-d4pdm" podStartSLOduration=1.7156338930000001 podStartE2EDuration="3.733362912s" podCreationTimestamp="2025-05-08 00:06:51 +0000 UTC" firstStartedPulling="2025-05-08 00:06:52.333798708 +0000 UTC m=+15.936428010" lastFinishedPulling="2025-05-08 00:06:54.351527725 +0000 UTC m=+17.954157029" observedRunningTime="2025-05-08 00:06:54.732889064 +0000 UTC m=+18.335518389" watchObservedRunningTime="2025-05-08 00:06:54.733362912 +0000 UTC m=+18.335992237"
May 8 00:06:55.693545 kubelet[2638]: E0508 00:06:55.690792 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:06:59.416802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount526165628.mount: Deactivated successfully.
May 8 00:07:02.495378 containerd[1485]: time="2025-05-08T00:07:02.494713227Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 8 00:07:02.495378 containerd[1485]: time="2025-05-08T00:07:02.494916277Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:07:02.510690 containerd[1485]: time="2025-05-08T00:07:02.510623664Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:07:02.514443 containerd[1485]: time="2025-05-08T00:07:02.514072336Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.162186471s"
May 8 00:07:02.514443 containerd[1485]: time="2025-05-08T00:07:02.514131261Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 8 00:07:02.521050 containerd[1485]: time="2025-05-08T00:07:02.520990807Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:07:02.585134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269838523.mount: Deactivated successfully.
May 8 00:07:02.589820 containerd[1485]: time="2025-05-08T00:07:02.589759378Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\""
May 8 00:07:02.591630 containerd[1485]: time="2025-05-08T00:07:02.591583181Z" level=info msg="StartContainer for \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\""
May 8 00:07:02.840796 systemd[1]: Started cri-containerd-8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f.scope - libcontainer container 8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f.
May 8 00:07:02.896614 containerd[1485]: time="2025-05-08T00:07:02.896552973Z" level=info msg="StartContainer for \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\" returns successfully"
May 8 00:07:02.919835 systemd[1]: cri-containerd-8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f.scope: Deactivated successfully.
May 8 00:07:03.030353 containerd[1485]: time="2025-05-08T00:07:03.016553257Z" level=info msg="shim disconnected" id=8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f namespace=k8s.io
May 8 00:07:03.030353 containerd[1485]: time="2025-05-08T00:07:03.030118386Z" level=warning msg="cleaning up after shim disconnected" id=8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f namespace=k8s.io
May 8 00:07:03.030353 containerd[1485]: time="2025-05-08T00:07:03.030137620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:07:03.575351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f-rootfs.mount: Deactivated successfully.
May 8 00:07:03.729911 kubelet[2638]: E0508 00:07:03.729870 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:03.738132 containerd[1485]: time="2025-05-08T00:07:03.737927528Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:07:03.760316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983536177.mount: Deactivated successfully.
May 8 00:07:03.762114 containerd[1485]: time="2025-05-08T00:07:03.762012771Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\""
May 8 00:07:03.763652 containerd[1485]: time="2025-05-08T00:07:03.762884094Z" level=info msg="StartContainer for \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\""
May 8 00:07:03.834781 systemd[1]: Started cri-containerd-f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5.scope - libcontainer container f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5.
May 8 00:07:03.869918 containerd[1485]: time="2025-05-08T00:07:03.869634118Z" level=info msg="StartContainer for \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\" returns successfully"
May 8 00:07:03.890239 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:07:03.890640 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:07:03.890907 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 8 00:07:03.898073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:07:03.899452 systemd[1]: cri-containerd-f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5.scope: Deactivated successfully.
May 8 00:07:03.934381 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:07:03.945602 containerd[1485]: time="2025-05-08T00:07:03.945535076Z" level=info msg="shim disconnected" id=f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5 namespace=k8s.io
May 8 00:07:03.945602 containerd[1485]: time="2025-05-08T00:07:03.945592796Z" level=warning msg="cleaning up after shim disconnected" id=f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5 namespace=k8s.io
May 8 00:07:03.945602 containerd[1485]: time="2025-05-08T00:07:03.945604254Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:07:04.576533 systemd[1]: run-containerd-runc-k8s.io-f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5-runc.L1zzC5.mount: Deactivated successfully.
May 8 00:07:04.576982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5-rootfs.mount: Deactivated successfully.
May 8 00:07:04.734081 kubelet[2638]: E0508 00:07:04.734008 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:04.736715 containerd[1485]: time="2025-05-08T00:07:04.736661133Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:07:04.785485 containerd[1485]: time="2025-05-08T00:07:04.784945788Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\""
May 8 00:07:04.789514 containerd[1485]: time="2025-05-08T00:07:04.788567513Z" level=info msg="StartContainer for \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\""
May 8 00:07:04.845992 systemd[1]: Started cri-containerd-04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b.scope - libcontainer container 04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b.
May 8 00:07:04.893603 containerd[1485]: time="2025-05-08T00:07:04.893488896Z" level=info msg="StartContainer for \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\" returns successfully"
May 8 00:07:04.902264 systemd[1]: cri-containerd-04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b.scope: Deactivated successfully.
May 8 00:07:04.939914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b-rootfs.mount: Deactivated successfully.
May 8 00:07:04.942562 containerd[1485]: time="2025-05-08T00:07:04.942270404Z" level=info msg="shim disconnected" id=04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b namespace=k8s.io
May 8 00:07:04.942562 containerd[1485]: time="2025-05-08T00:07:04.942328460Z" level=warning msg="cleaning up after shim disconnected" id=04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b namespace=k8s.io
May 8 00:07:04.942562 containerd[1485]: time="2025-05-08T00:07:04.942337196Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:07:05.744857 kubelet[2638]: E0508 00:07:05.744817 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:05.751047 containerd[1485]: time="2025-05-08T00:07:05.749788331Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:07:05.775009 containerd[1485]: time="2025-05-08T00:07:05.774695382Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\""
May 8 00:07:05.780262 containerd[1485]: time="2025-05-08T00:07:05.778369028Z" level=info msg="StartContainer for \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\""
May 8 00:07:05.832679 systemd[1]: Started cri-containerd-a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507.scope - libcontainer container a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507.
May 8 00:07:05.874698 systemd[1]: cri-containerd-a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507.scope: Deactivated successfully.
May 8 00:07:05.878196 containerd[1485]: time="2025-05-08T00:07:05.877110766Z" level=info msg="StartContainer for \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\" returns successfully"
May 8 00:07:05.906553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507-rootfs.mount: Deactivated successfully.
May 8 00:07:05.911056 containerd[1485]: time="2025-05-08T00:07:05.910956741Z" level=info msg="shim disconnected" id=a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507 namespace=k8s.io
May 8 00:07:05.911056 containerd[1485]: time="2025-05-08T00:07:05.911047103Z" level=warning msg="cleaning up after shim disconnected" id=a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507 namespace=k8s.io
May 8 00:07:05.911056 containerd[1485]: time="2025-05-08T00:07:05.911061044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:07:06.750089 kubelet[2638]: E0508 00:07:06.750022 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:06.763468 containerd[1485]: time="2025-05-08T00:07:06.763208712Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:07:06.803093 containerd[1485]: time="2025-05-08T00:07:06.802946519Z" level=info msg="CreateContainer within sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\""
May 8 00:07:06.807458 containerd[1485]: time="2025-05-08T00:07:06.807381158Z" level=info msg="StartContainer for \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\""
May 8 00:07:06.863784 systemd[1]: Started cri-containerd-63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5.scope - libcontainer container 63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5.
May 8 00:07:06.918920 containerd[1485]: time="2025-05-08T00:07:06.918778699Z" level=info msg="StartContainer for \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\" returns successfully"
May 8 00:07:07.119951 kubelet[2638]: I0508 00:07:07.119892 2638 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 8 00:07:07.166884 kubelet[2638]: I0508 00:07:07.165436 2638 topology_manager.go:215] "Topology Admit Handler" podUID="19e21d9a-e9d2-433a-8825-898ac9f46ff8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t88zx"
May 8 00:07:07.174615 kubelet[2638]: I0508 00:07:07.174568 2638 topology_manager.go:215] "Topology Admit Handler" podUID="6d1bbc15-65e5-49cd-a810-c5f2a13dbc84" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pgtvq"
May 8 00:07:07.184560 systemd[1]: Created slice kubepods-burstable-pod19e21d9a_e9d2_433a_8825_898ac9f46ff8.slice - libcontainer container kubepods-burstable-pod19e21d9a_e9d2_433a_8825_898ac9f46ff8.slice.
May 8 00:07:07.199629 systemd[1]: Created slice kubepods-burstable-pod6d1bbc15_65e5_49cd_a810_c5f2a13dbc84.slice - libcontainer container kubepods-burstable-pod6d1bbc15_65e5_49cd_a810_c5f2a13dbc84.slice.
May 8 00:07:07.207891 kubelet[2638]: I0508 00:07:07.207158 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e21d9a-e9d2-433a-8825-898ac9f46ff8-config-volume\") pod \"coredns-7db6d8ff4d-t88zx\" (UID: \"19e21d9a-e9d2-433a-8825-898ac9f46ff8\") " pod="kube-system/coredns-7db6d8ff4d-t88zx"
May 8 00:07:07.207891 kubelet[2638]: I0508 00:07:07.207202 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnrrp\" (UniqueName: \"kubernetes.io/projected/6d1bbc15-65e5-49cd-a810-c5f2a13dbc84-kube-api-access-xnrrp\") pod \"coredns-7db6d8ff4d-pgtvq\" (UID: \"6d1bbc15-65e5-49cd-a810-c5f2a13dbc84\") " pod="kube-system/coredns-7db6d8ff4d-pgtvq"
May 8 00:07:07.207891 kubelet[2638]: I0508 00:07:07.207226 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwc9q\" (UniqueName: \"kubernetes.io/projected/19e21d9a-e9d2-433a-8825-898ac9f46ff8-kube-api-access-hwc9q\") pod \"coredns-7db6d8ff4d-t88zx\" (UID: \"19e21d9a-e9d2-433a-8825-898ac9f46ff8\") " pod="kube-system/coredns-7db6d8ff4d-t88zx"
May 8 00:07:07.207891 kubelet[2638]: I0508 00:07:07.207244 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1bbc15-65e5-49cd-a810-c5f2a13dbc84-config-volume\") pod \"coredns-7db6d8ff4d-pgtvq\" (UID: \"6d1bbc15-65e5-49cd-a810-c5f2a13dbc84\") " pod="kube-system/coredns-7db6d8ff4d-pgtvq"
May 8 00:07:07.493465 kubelet[2638]: E0508 00:07:07.493029 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:07.497698 containerd[1485]: time="2025-05-08T00:07:07.497272141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t88zx,Uid:19e21d9a-e9d2-433a-8825-898ac9f46ff8,Namespace:kube-system,Attempt:0,}"
May 8 00:07:07.506190 kubelet[2638]: E0508 00:07:07.505571 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:07.508052 containerd[1485]: time="2025-05-08T00:07:07.507672254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgtvq,Uid:6d1bbc15-65e5-49cd-a810-c5f2a13dbc84,Namespace:kube-system,Attempt:0,}"
May 8 00:07:07.756886 kubelet[2638]: E0508 00:07:07.756754 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:08.759588 kubelet[2638]: E0508 00:07:08.759532 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:09.362836 systemd-networkd[1379]: cilium_host: Link UP
May 8 00:07:09.365143 systemd-networkd[1379]: cilium_net: Link UP
May 8 00:07:09.366834 systemd-networkd[1379]: cilium_net: Gained carrier
May 8 00:07:09.367056 systemd-networkd[1379]: cilium_host: Gained carrier
May 8 00:07:09.534360 systemd-networkd[1379]: cilium_vxlan: Link UP
May 8 00:07:09.534371 systemd-networkd[1379]: cilium_vxlan: Gained carrier
May 8 00:07:09.763884 kubelet[2638]: E0508 00:07:09.763370 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:09.864015 systemd-networkd[1379]: cilium_net: Gained IPv6LL
May 8 00:07:10.015993 systemd-networkd[1379]: cilium_host: Gained IPv6LL
May 8 00:07:10.068916 kernel: NET: Registered PF_ALG protocol family
May 8 00:07:11.169337 systemd-networkd[1379]: cilium_vxlan: Gained IPv6LL
May 8 00:07:11.230967 systemd-networkd[1379]: lxc_health: Link UP
May 8 00:07:11.239856 systemd-networkd[1379]: lxc_health: Gained carrier
May 8 00:07:11.611462 kernel: eth0: renamed from tmp359ac
May 8 00:07:11.620953 systemd-networkd[1379]: lxc00027d7b32e5: Link UP
May 8 00:07:11.623204 systemd-networkd[1379]: lxc00027d7b32e5: Gained carrier
May 8 00:07:11.674591 kernel: eth0: renamed from tmp1c053
May 8 00:07:11.678138 systemd-networkd[1379]: lxcbfdfb9988172: Link UP
May 8 00:07:11.689137 systemd-networkd[1379]: lxcbfdfb9988172: Gained carrier
May 8 00:07:12.768710 systemd-networkd[1379]: lxc_health: Gained IPv6LL
May 8 00:07:12.901861 kubelet[2638]: E0508 00:07:12.899281 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:12.940242 kubelet[2638]: I0508 00:07:12.938942 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rz4v2" podStartSLOduration=13.514565433 podStartE2EDuration="22.93890651s" podCreationTimestamp="2025-05-08 00:06:50 +0000 UTC" firstStartedPulling="2025-05-08 00:06:53.091628359 +0000 UTC m=+16.694257664" lastFinishedPulling="2025-05-08 00:07:02.515969426 +0000 UTC m=+26.118598741" observedRunningTime="2025-05-08 00:07:07.788346909 +0000 UTC m=+31.390976233" watchObservedRunningTime="2025-05-08 00:07:12.93890651 +0000 UTC m=+36.541535842"
May 8 00:07:12.960080 systemd-networkd[1379]: lxcbfdfb9988172: Gained IPv6LL
May 8 00:07:13.600467 systemd-networkd[1379]: lxc00027d7b32e5: Gained IPv6LL
May 8 00:07:13.780989 kubelet[2638]: E0508 00:07:13.780944 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:14.786117 kubelet[2638]: E0508 00:07:14.786071 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:18.139270 containerd[1485]: time="2025-05-08T00:07:18.139115233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:07:18.139270 containerd[1485]: time="2025-05-08T00:07:18.139214956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:07:18.143395 containerd[1485]: time="2025-05-08T00:07:18.140904897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:07:18.143395 containerd[1485]: time="2025-05-08T00:07:18.141176943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:07:18.176144 containerd[1485]: time="2025-05-08T00:07:18.175663313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:07:18.176144 containerd[1485]: time="2025-05-08T00:07:18.175740572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:07:18.176144 containerd[1485]: time="2025-05-08T00:07:18.175754364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:07:18.176144 containerd[1485]: time="2025-05-08T00:07:18.175846076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:07:18.244792 systemd[1]: Started cri-containerd-1c0538e43d1afe1faeb69c400278db849452a19d114cd13c1c751f57a161de8b.scope - libcontainer container 1c0538e43d1afe1faeb69c400278db849452a19d114cd13c1c751f57a161de8b.
May 8 00:07:18.249287 systemd[1]: Started cri-containerd-359ac4aeeac3b63fd52b6f9b895e09716fbcf615e278a25080485cc1a17b25ca.scope - libcontainer container 359ac4aeeac3b63fd52b6f9b895e09716fbcf615e278a25080485cc1a17b25ca.
May 8 00:07:18.369209 containerd[1485]: time="2025-05-08T00:07:18.369161003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t88zx,Uid:19e21d9a-e9d2-433a-8825-898ac9f46ff8,Namespace:kube-system,Attempt:0,} returns sandbox id \"359ac4aeeac3b63fd52b6f9b895e09716fbcf615e278a25080485cc1a17b25ca\""
May 8 00:07:18.370972 kubelet[2638]: E0508 00:07:18.370939 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:18.379192 containerd[1485]: time="2025-05-08T00:07:18.379142500Z" level=info msg="CreateContainer within sandbox \"359ac4aeeac3b63fd52b6f9b895e09716fbcf615e278a25080485cc1a17b25ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:07:18.383629 containerd[1485]: time="2025-05-08T00:07:18.383581569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgtvq,Uid:6d1bbc15-65e5-49cd-a810-c5f2a13dbc84,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c0538e43d1afe1faeb69c400278db849452a19d114cd13c1c751f57a161de8b\""
May 8 00:07:18.386687 kubelet[2638]: E0508 00:07:18.385913 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:18.394065 containerd[1485]: time="2025-05-08T00:07:18.393909292Z" level=info msg="CreateContainer within sandbox \"1c0538e43d1afe1faeb69c400278db849452a19d114cd13c1c751f57a161de8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:07:18.428776 containerd[1485]: time="2025-05-08T00:07:18.428501225Z" level=info msg="CreateContainer within sandbox \"1c0538e43d1afe1faeb69c400278db849452a19d114cd13c1c751f57a161de8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c841286d1017fbe442a6b441f5f755556745800a945580b1406a5258b5501b5\""
May 8 00:07:18.429746 containerd[1485]: time="2025-05-08T00:07:18.429238418Z" level=info msg="StartContainer for \"7c841286d1017fbe442a6b441f5f755556745800a945580b1406a5258b5501b5\""
May 8 00:07:18.431229 containerd[1485]: time="2025-05-08T00:07:18.430893643Z" level=info msg="CreateContainer within sandbox \"359ac4aeeac3b63fd52b6f9b895e09716fbcf615e278a25080485cc1a17b25ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34d5065b3c14ea6e67da94e351170babb6d9e8dd4c1e8735bc351cce3f9c3d83\""
May 8 00:07:18.433232 containerd[1485]: time="2025-05-08T00:07:18.431693895Z" level=info msg="StartContainer for \"34d5065b3c14ea6e67da94e351170babb6d9e8dd4c1e8735bc351cce3f9c3d83\""
May 8 00:07:18.481765 systemd[1]: Started cri-containerd-34d5065b3c14ea6e67da94e351170babb6d9e8dd4c1e8735bc351cce3f9c3d83.scope - libcontainer container 34d5065b3c14ea6e67da94e351170babb6d9e8dd4c1e8735bc351cce3f9c3d83.
May 8 00:07:18.486591 systemd[1]: Started cri-containerd-7c841286d1017fbe442a6b441f5f755556745800a945580b1406a5258b5501b5.scope - libcontainer container 7c841286d1017fbe442a6b441f5f755556745800a945580b1406a5258b5501b5.
May 8 00:07:18.561356 containerd[1485]: time="2025-05-08T00:07:18.561280785Z" level=info msg="StartContainer for \"34d5065b3c14ea6e67da94e351170babb6d9e8dd4c1e8735bc351cce3f9c3d83\" returns successfully"
May 8 00:07:18.561738 containerd[1485]: time="2025-05-08T00:07:18.561598477Z" level=info msg="StartContainer for \"7c841286d1017fbe442a6b441f5f755556745800a945580b1406a5258b5501b5\" returns successfully"
May 8 00:07:18.799078 kubelet[2638]: E0508 00:07:18.798901 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:18.803873 kubelet[2638]: E0508 00:07:18.803785 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:18.854839 kubelet[2638]: I0508 00:07:18.854763 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t88zx" podStartSLOduration=27.854740048 podStartE2EDuration="27.854740048s" podCreationTimestamp="2025-05-08 00:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:07:18.853504585 +0000 UTC m=+42.456133914" watchObservedRunningTime="2025-05-08 00:07:18.854740048 +0000 UTC m=+42.457369373"
May 8 00:07:18.855162 kubelet[2638]: I0508 00:07:18.854862 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pgtvq" podStartSLOduration=27.854856762 podStartE2EDuration="27.854856762s" podCreationTimestamp="2025-05-08 00:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:07:18.838177851 +0000 UTC m=+42.440807175" watchObservedRunningTime="2025-05-08 00:07:18.854856762 +0000 UTC m=+42.457486085"
May 8 00:07:19.160325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683500742.mount: Deactivated successfully.
May 8 00:07:19.807026 kubelet[2638]: E0508 00:07:19.806507 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:19.807676 kubelet[2638]: E0508 00:07:19.807623 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:20.808886 kubelet[2638]: E0508 00:07:20.808562 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:20.808886 kubelet[2638]: E0508 00:07:20.808705 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 8 00:07:23.168505 systemd[1]: Started sshd@7-146.190.122.31:22-139.178.68.195:41718.service - OpenSSH per-connection server daemon (139.178.68.195:41718).
May 8 00:07:23.302529 sshd[4017]: Accepted publickey for core from 139.178.68.195 port 41718 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:07:23.304850 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:07:23.311458 systemd-logind[1463]: New session 8 of user core.
May 8 00:07:23.318866 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:07:23.984448 sshd[4019]: Connection closed by 139.178.68.195 port 41718
May 8 00:07:23.985505 sshd-session[4017]: pam_unix(sshd:session): session closed for user core
May 8 00:07:23.989818 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
May 8 00:07:23.990465 systemd[1]: sshd@7-146.190.122.31:22-139.178.68.195:41718.service: Deactivated successfully.
May 8 00:07:23.993645 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:07:23.998963 systemd-logind[1463]: Removed session 8.
May 8 00:07:29.007146 systemd[1]: Started sshd@8-146.190.122.31:22-139.178.68.195:39972.service - OpenSSH per-connection server daemon (139.178.68.195:39972).
May 8 00:07:29.085304 sshd[4032]: Accepted publickey for core from 139.178.68.195 port 39972 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:07:29.087567 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:07:29.096817 systemd-logind[1463]: New session 9 of user core.
May 8 00:07:29.102764 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:07:29.291652 sshd[4034]: Connection closed by 139.178.68.195 port 39972
May 8 00:07:29.292861 sshd-session[4032]: pam_unix(sshd:session): session closed for user core
May 8 00:07:29.299247 systemd[1]: sshd@8-146.190.122.31:22-139.178.68.195:39972.service: Deactivated successfully.
May 8 00:07:29.302413 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:07:29.303837 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit.
May 8 00:07:29.306297 systemd-logind[1463]: Removed session 9.
May 8 00:07:34.322858 systemd[1]: Started sshd@9-146.190.122.31:22-139.178.68.195:39974.service - OpenSSH per-connection server daemon (139.178.68.195:39974).
May 8 00:07:34.384369 sshd[4048]: Accepted publickey for core from 139.178.68.195 port 39974 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90
May 8 00:07:34.387105 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:07:34.395445 systemd-logind[1463]: New session 10 of user core.
May 8 00:07:34.402755 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:07:34.582536 sshd[4050]: Connection closed by 139.178.68.195 port 39974 May 8 00:07:34.581805 sshd-session[4048]: pam_unix(sshd:session): session closed for user core May 8 00:07:34.589852 systemd[1]: sshd@9-146.190.122.31:22-139.178.68.195:39974.service: Deactivated successfully. May 8 00:07:34.595991 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:07:34.600275 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. May 8 00:07:34.602552 systemd-logind[1463]: Removed session 10. May 8 00:07:39.603980 systemd[1]: Started sshd@10-146.190.122.31:22-139.178.68.195:38002.service - OpenSSH per-connection server daemon (139.178.68.195:38002). May 8 00:07:39.665036 sshd[4065]: Accepted publickey for core from 139.178.68.195 port 38002 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:39.667520 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:39.677655 systemd-logind[1463]: New session 11 of user core. May 8 00:07:39.680803 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:07:39.844206 sshd[4067]: Connection closed by 139.178.68.195 port 38002 May 8 00:07:39.845168 sshd-session[4065]: pam_unix(sshd:session): session closed for user core May 8 00:07:39.850287 systemd[1]: sshd@10-146.190.122.31:22-139.178.68.195:38002.service: Deactivated successfully. May 8 00:07:39.854396 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:07:39.857363 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. May 8 00:07:39.861011 systemd-logind[1463]: Removed session 11. May 8 00:07:44.875997 systemd[1]: Started sshd@11-146.190.122.31:22-139.178.68.195:38004.service - OpenSSH per-connection server daemon (139.178.68.195:38004). 
May 8 00:07:44.941898 sshd[4080]: Accepted publickey for core from 139.178.68.195 port 38004 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:44.944491 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:44.951998 systemd-logind[1463]: New session 12 of user core. May 8 00:07:44.962825 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:07:45.174490 sshd[4082]: Connection closed by 139.178.68.195 port 38004 May 8 00:07:45.175376 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 8 00:07:45.190353 systemd[1]: sshd@11-146.190.122.31:22-139.178.68.195:38004.service: Deactivated successfully. May 8 00:07:45.195146 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:07:45.198400 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. May 8 00:07:45.206856 systemd[1]: Started sshd@12-146.190.122.31:22-139.178.68.195:50808.service - OpenSSH per-connection server daemon (139.178.68.195:50808). May 8 00:07:45.210483 systemd-logind[1463]: Removed session 12. May 8 00:07:45.266259 sshd[4094]: Accepted publickey for core from 139.178.68.195 port 50808 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:45.268695 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:45.277483 systemd-logind[1463]: New session 13 of user core. May 8 00:07:45.284766 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:07:45.521997 sshd[4097]: Connection closed by 139.178.68.195 port 50808 May 8 00:07:45.523480 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 8 00:07:45.544364 systemd[1]: sshd@12-146.190.122.31:22-139.178.68.195:50808.service: Deactivated successfully. May 8 00:07:45.548030 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:07:45.550659 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
May 8 00:07:45.562056 systemd[1]: Started sshd@13-146.190.122.31:22-139.178.68.195:50816.service - OpenSSH per-connection server daemon (139.178.68.195:50816). May 8 00:07:45.567005 systemd-logind[1463]: Removed session 13. May 8 00:07:45.641235 sshd[4106]: Accepted publickey for core from 139.178.68.195 port 50816 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:45.643615 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:45.654214 systemd-logind[1463]: New session 14 of user core. May 8 00:07:45.660751 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:07:45.835592 sshd[4109]: Connection closed by 139.178.68.195 port 50816 May 8 00:07:45.837116 sshd-session[4106]: pam_unix(sshd:session): session closed for user core May 8 00:07:45.844713 systemd[1]: sshd@13-146.190.122.31:22-139.178.68.195:50816.service: Deactivated successfully. May 8 00:07:45.847994 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:07:45.849988 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. May 8 00:07:45.851865 systemd-logind[1463]: Removed session 14. May 8 00:07:50.860125 systemd[1]: Started sshd@14-146.190.122.31:22-139.178.68.195:50818.service - OpenSSH per-connection server daemon (139.178.68.195:50818). May 8 00:07:50.927271 sshd[4123]: Accepted publickey for core from 139.178.68.195 port 50818 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:50.929663 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:50.938409 systemd-logind[1463]: New session 15 of user core. May 8 00:07:50.943731 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:07:51.121742 sshd[4125]: Connection closed by 139.178.68.195 port 50818 May 8 00:07:51.123839 sshd-session[4123]: pam_unix(sshd:session): session closed for user core May 8 00:07:51.130541 systemd[1]: sshd@14-146.190.122.31:22-139.178.68.195:50818.service: Deactivated successfully. May 8 00:07:51.133840 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:07:51.135538 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. May 8 00:07:51.137920 systemd-logind[1463]: Removed session 15. May 8 00:07:55.580474 kubelet[2638]: E0508 00:07:55.579956 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:07:56.145042 systemd[1]: Started sshd@15-146.190.122.31:22-139.178.68.195:45902.service - OpenSSH per-connection server daemon (139.178.68.195:45902). May 8 00:07:56.217496 sshd[4140]: Accepted publickey for core from 139.178.68.195 port 45902 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:56.220701 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:56.230093 systemd-logind[1463]: New session 16 of user core. May 8 00:07:56.237808 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:07:56.415587 sshd[4142]: Connection closed by 139.178.68.195 port 45902 May 8 00:07:56.418817 sshd-session[4140]: pam_unix(sshd:session): session closed for user core May 8 00:07:56.433021 systemd[1]: sshd@15-146.190.122.31:22-139.178.68.195:45902.service: Deactivated successfully. May 8 00:07:56.436262 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:07:56.438451 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. May 8 00:07:56.450203 systemd[1]: Started sshd@16-146.190.122.31:22-139.178.68.195:45904.service - OpenSSH per-connection server daemon (139.178.68.195:45904). 
May 8 00:07:56.457191 systemd-logind[1463]: Removed session 16. May 8 00:07:56.523945 sshd[4152]: Accepted publickey for core from 139.178.68.195 port 45904 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:56.527184 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:56.537329 systemd-logind[1463]: New session 17 of user core. May 8 00:07:56.542786 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:07:56.988599 sshd[4155]: Connection closed by 139.178.68.195 port 45904 May 8 00:07:56.990093 sshd-session[4152]: pam_unix(sshd:session): session closed for user core May 8 00:07:57.009554 systemd[1]: sshd@16-146.190.122.31:22-139.178.68.195:45904.service: Deactivated successfully. May 8 00:07:57.012853 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:07:57.016172 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. May 8 00:07:57.023229 systemd[1]: Started sshd@17-146.190.122.31:22-139.178.68.195:45906.service - OpenSSH per-connection server daemon (139.178.68.195:45906). May 8 00:07:57.027757 systemd-logind[1463]: Removed session 17. May 8 00:07:57.113670 sshd[4165]: Accepted publickey for core from 139.178.68.195 port 45906 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:57.116288 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:57.126953 systemd-logind[1463]: New session 18 of user core. May 8 00:07:57.136844 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:07:59.549390 sshd[4168]: Connection closed by 139.178.68.195 port 45906 May 8 00:07:59.549998 sshd-session[4165]: pam_unix(sshd:session): session closed for user core May 8 00:07:59.572199 systemd[1]: sshd@17-146.190.122.31:22-139.178.68.195:45906.service: Deactivated successfully. May 8 00:07:59.579693 systemd[1]: session-18.scope: Deactivated successfully. 
May 8 00:07:59.584971 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. May 8 00:07:59.599631 systemd[1]: Started sshd@18-146.190.122.31:22-139.178.68.195:45908.service - OpenSSH per-connection server daemon (139.178.68.195:45908). May 8 00:07:59.607171 systemd-logind[1463]: Removed session 18. May 8 00:07:59.705142 sshd[4183]: Accepted publickey for core from 139.178.68.195 port 45908 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:07:59.708066 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:07:59.717528 systemd-logind[1463]: New session 19 of user core. May 8 00:07:59.723786 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:08:00.154170 sshd[4187]: Connection closed by 139.178.68.195 port 45908 May 8 00:08:00.152732 sshd-session[4183]: pam_unix(sshd:session): session closed for user core May 8 00:08:00.172065 systemd[1]: sshd@18-146.190.122.31:22-139.178.68.195:45908.service: Deactivated successfully. May 8 00:08:00.178229 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:08:00.181109 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. May 8 00:08:00.195221 systemd[1]: Started sshd@19-146.190.122.31:22-139.178.68.195:45912.service - OpenSSH per-connection server daemon (139.178.68.195:45912). May 8 00:08:00.203059 systemd-logind[1463]: Removed session 19. May 8 00:08:00.275249 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 45912 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:00.281977 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:00.297618 systemd-logind[1463]: New session 20 of user core. May 8 00:08:00.305356 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 8 00:08:00.499344 sshd[4198]: Connection closed by 139.178.68.195 port 45912 May 8 00:08:00.501106 sshd-session[4195]: pam_unix(sshd:session): session closed for user core May 8 00:08:00.507795 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. May 8 00:08:00.510162 systemd[1]: sshd@19-146.190.122.31:22-139.178.68.195:45912.service: Deactivated successfully. May 8 00:08:00.514221 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:08:00.518021 systemd-logind[1463]: Removed session 20. May 8 00:08:02.582023 kubelet[2638]: E0508 00:08:02.581734 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:05.529183 systemd[1]: Started sshd@20-146.190.122.31:22-139.178.68.195:55432.service - OpenSSH per-connection server daemon (139.178.68.195:55432). May 8 00:08:05.592735 sshd[4210]: Accepted publickey for core from 139.178.68.195 port 55432 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:05.595337 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:05.605526 systemd-logind[1463]: New session 21 of user core. May 8 00:08:05.614904 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:08:05.788936 sshd[4212]: Connection closed by 139.178.68.195 port 55432 May 8 00:08:05.789902 sshd-session[4210]: pam_unix(sshd:session): session closed for user core May 8 00:08:05.796937 systemd[1]: sshd@20-146.190.122.31:22-139.178.68.195:55432.service: Deactivated successfully. May 8 00:08:05.801326 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:08:05.803225 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. May 8 00:08:05.805378 systemd-logind[1463]: Removed session 21. 
May 8 00:08:07.580576 kubelet[2638]: E0508 00:08:07.580525 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:10.816693 systemd[1]: Started sshd@21-146.190.122.31:22-139.178.68.195:55434.service - OpenSSH per-connection server daemon (139.178.68.195:55434). May 8 00:08:10.883097 sshd[4226]: Accepted publickey for core from 139.178.68.195 port 55434 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:10.885585 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:10.894309 systemd-logind[1463]: New session 22 of user core. May 8 00:08:10.899912 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:08:11.065643 sshd[4228]: Connection closed by 139.178.68.195 port 55434 May 8 00:08:11.066151 sshd-session[4226]: pam_unix(sshd:session): session closed for user core May 8 00:08:11.072399 systemd[1]: sshd@21-146.190.122.31:22-139.178.68.195:55434.service: Deactivated successfully. May 8 00:08:11.072580 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. May 8 00:08:11.076260 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:08:11.079575 systemd-logind[1463]: Removed session 22. May 8 00:08:14.580476 kubelet[2638]: E0508 00:08:14.579656 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:16.091339 systemd[1]: Started sshd@22-146.190.122.31:22-139.178.68.195:47732.service - OpenSSH per-connection server daemon (139.178.68.195:47732). 
May 8 00:08:16.163918 sshd[4241]: Accepted publickey for core from 139.178.68.195 port 47732 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:16.166162 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:16.174281 systemd-logind[1463]: New session 23 of user core. May 8 00:08:16.181770 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:08:16.343572 sshd[4243]: Connection closed by 139.178.68.195 port 47732 May 8 00:08:16.344892 sshd-session[4241]: pam_unix(sshd:session): session closed for user core May 8 00:08:16.350220 systemd[1]: sshd@22-146.190.122.31:22-139.178.68.195:47732.service: Deactivated successfully. May 8 00:08:16.354081 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:08:16.357541 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. May 8 00:08:16.359754 systemd-logind[1463]: Removed session 23. May 8 00:08:16.581766 kubelet[2638]: E0508 00:08:16.580613 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:21.366139 systemd[1]: Started sshd@23-146.190.122.31:22-139.178.68.195:47748.service - OpenSSH per-connection server daemon (139.178.68.195:47748). May 8 00:08:21.438682 sshd[4255]: Accepted publickey for core from 139.178.68.195 port 47748 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:21.444713 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:21.458688 systemd-logind[1463]: New session 24 of user core. May 8 00:08:21.465796 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 8 00:08:21.652116 sshd[4257]: Connection closed by 139.178.68.195 port 47748 May 8 00:08:21.652679 sshd-session[4255]: pam_unix(sshd:session): session closed for user core May 8 00:08:21.674618 systemd[1]: sshd@23-146.190.122.31:22-139.178.68.195:47748.service: Deactivated successfully. May 8 00:08:21.678639 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:08:21.682464 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. May 8 00:08:21.690804 systemd[1]: Started sshd@24-146.190.122.31:22-139.178.68.195:47760.service - OpenSSH per-connection server daemon (139.178.68.195:47760). May 8 00:08:21.695536 systemd-logind[1463]: Removed session 24. May 8 00:08:21.785374 sshd[4268]: Accepted publickey for core from 139.178.68.195 port 47760 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:21.788260 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:21.797163 systemd-logind[1463]: New session 25 of user core. May 8 00:08:21.802910 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 8 00:08:23.588672 kubelet[2638]: E0508 00:08:23.587413 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:23.639661 containerd[1485]: time="2025-05-08T00:08:23.639517752Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:08:23.670464 containerd[1485]: time="2025-05-08T00:08:23.669606033Z" level=info msg="StopContainer for \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\" with timeout 30 (s)" May 8 00:08:23.670464 containerd[1485]: time="2025-05-08T00:08:23.669642011Z" level=info msg="StopContainer for \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\" with timeout 2 (s)" May 8 00:08:23.672046 containerd[1485]: time="2025-05-08T00:08:23.671544948Z" level=info msg="Stop container \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\" with signal terminated" May 8 00:08:23.672264 containerd[1485]: time="2025-05-08T00:08:23.671556911Z" level=info msg="Stop container \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\" with signal terminated" May 8 00:08:23.692366 systemd-networkd[1379]: lxc_health: Link DOWN May 8 00:08:23.692396 systemd-networkd[1379]: lxc_health: Lost carrier May 8 00:08:23.717056 systemd[1]: cri-containerd-63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5.scope: Deactivated successfully. May 8 00:08:23.717512 systemd[1]: cri-containerd-63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5.scope: Consumed 11.082s CPU time, 166.6M memory peak, 42.4M read from disk, 13.3M written to disk. 
May 8 00:08:23.727709 systemd[1]: cri-containerd-121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070.scope: Deactivated successfully. May 8 00:08:23.728485 systemd[1]: cri-containerd-121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070.scope: Consumed 505ms CPU time, 28.8M memory peak, 5.4M read from disk, 4K written to disk. May 8 00:08:23.769944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5-rootfs.mount: Deactivated successfully. May 8 00:08:23.782712 containerd[1485]: time="2025-05-08T00:08:23.782626821Z" level=info msg="shim disconnected" id=63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5 namespace=k8s.io May 8 00:08:23.783491 containerd[1485]: time="2025-05-08T00:08:23.783183733Z" level=warning msg="cleaning up after shim disconnected" id=63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5 namespace=k8s.io May 8 00:08:23.783491 containerd[1485]: time="2025-05-08T00:08:23.783217102Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:23.791481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070-rootfs.mount: Deactivated successfully. 
May 8 00:08:23.793941 containerd[1485]: time="2025-05-08T00:08:23.793560414Z" level=info msg="shim disconnected" id=121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070 namespace=k8s.io May 8 00:08:23.793941 containerd[1485]: time="2025-05-08T00:08:23.793686960Z" level=warning msg="cleaning up after shim disconnected" id=121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070 namespace=k8s.io May 8 00:08:23.793941 containerd[1485]: time="2025-05-08T00:08:23.793705274Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:23.822904 containerd[1485]: time="2025-05-08T00:08:23.822848326Z" level=info msg="StopContainer for \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\" returns successfully" May 8 00:08:23.824604 containerd[1485]: time="2025-05-08T00:08:23.823688731Z" level=info msg="StopPodSandbox for \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\"" May 8 00:08:23.830731 containerd[1485]: time="2025-05-08T00:08:23.825407547Z" level=info msg="Container to stop \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:08:23.842156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2-shm.mount: Deactivated successfully. 
May 8 00:08:23.845702 containerd[1485]: time="2025-05-08T00:08:23.843296191Z" level=info msg="StopContainer for \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\" returns successfully" May 8 00:08:23.848173 containerd[1485]: time="2025-05-08T00:08:23.847934105Z" level=info msg="StopPodSandbox for \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\"" May 8 00:08:23.849783 containerd[1485]: time="2025-05-08T00:08:23.849245638Z" level=info msg="Container to stop \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:08:23.849783 containerd[1485]: time="2025-05-08T00:08:23.849338098Z" level=info msg="Container to stop \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:08:23.849783 containerd[1485]: time="2025-05-08T00:08:23.849352437Z" level=info msg="Container to stop \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:08:23.849783 containerd[1485]: time="2025-05-08T00:08:23.849365656Z" level=info msg="Container to stop \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:08:23.849783 containerd[1485]: time="2025-05-08T00:08:23.849387069Z" level=info msg="Container to stop \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:08:23.856715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1-shm.mount: Deactivated successfully. May 8 00:08:23.866715 systemd[1]: cri-containerd-0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2.scope: Deactivated successfully. 
May 8 00:08:23.881663 systemd[1]: cri-containerd-fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1.scope: Deactivated successfully. May 8 00:08:23.933122 containerd[1485]: time="2025-05-08T00:08:23.932836743Z" level=info msg="shim disconnected" id=fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1 namespace=k8s.io May 8 00:08:23.933122 containerd[1485]: time="2025-05-08T00:08:23.932901264Z" level=warning msg="cleaning up after shim disconnected" id=fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1 namespace=k8s.io May 8 00:08:23.933122 containerd[1485]: time="2025-05-08T00:08:23.932910262Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:23.933122 containerd[1485]: time="2025-05-08T00:08:23.932932576Z" level=info msg="shim disconnected" id=0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2 namespace=k8s.io May 8 00:08:23.933122 containerd[1485]: time="2025-05-08T00:08:23.932979133Z" level=warning msg="cleaning up after shim disconnected" id=0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2 namespace=k8s.io May 8 00:08:23.933122 containerd[1485]: time="2025-05-08T00:08:23.932992246Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:23.964606 containerd[1485]: time="2025-05-08T00:08:23.964535273Z" level=info msg="TearDown network for sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" successfully" May 8 00:08:23.964606 containerd[1485]: time="2025-05-08T00:08:23.964607887Z" level=info msg="StopPodSandbox for \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" returns successfully" May 8 00:08:23.965680 containerd[1485]: time="2025-05-08T00:08:23.965629359Z" level=info msg="TearDown network for sandbox \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" successfully" May 8 00:08:23.965680 containerd[1485]: time="2025-05-08T00:08:23.965671701Z" level=info msg="StopPodSandbox for \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" returns successfully"
May 8 00:08:23.977287 kubelet[2638]: I0508 00:08:23.977254 2638 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2" May 8 00:08:23.991355 kubelet[2638]: I0508 00:08:23.991209 2638 scope.go:117] "RemoveContainer" containerID="63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5" May 8 00:08:24.004049 containerd[1485]: time="2025-05-08T00:08:24.003329074Z" level=info msg="RemoveContainer for \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\"" May 8 00:08:24.016042 containerd[1485]: time="2025-05-08T00:08:24.015133931Z" level=info msg="RemoveContainer for \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\" returns successfully" May 8 00:08:24.016211 kubelet[2638]: I0508 00:08:24.015495 2638 scope.go:117] "RemoveContainer" containerID="a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507" May 8 00:08:24.021431 containerd[1485]: time="2025-05-08T00:08:24.020177923Z" level=info msg="RemoveContainer for \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\"" May 8 00:08:24.030441 containerd[1485]: time="2025-05-08T00:08:24.030358073Z" level=info msg="RemoveContainer for \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\" returns successfully" May 8 00:08:24.030968 kubelet[2638]: I0508 00:08:24.030939 2638 scope.go:117] "RemoveContainer" containerID="04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b" May 8 00:08:24.035652 containerd[1485]: time="2025-05-08T00:08:24.035501752Z" level=info msg="RemoveContainer for \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\"" May 8 00:08:24.045867 containerd[1485]: time="2025-05-08T00:08:24.045816679Z" level=info msg="RemoveContainer for \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\" returns successfully"
May 8 00:08:24.046394 kubelet[2638]: I0508 00:08:24.046369 2638 scope.go:117] "RemoveContainer" containerID="f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5" May 8 00:08:24.048728 containerd[1485]: time="2025-05-08T00:08:24.048684522Z" level=info msg="RemoveContainer for \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\"" May 8 00:08:24.054021 containerd[1485]: time="2025-05-08T00:08:24.053671699Z" level=info msg="RemoveContainer for \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\" returns successfully" May 8 00:08:24.054480 kubelet[2638]: I0508 00:08:24.054404 2638 scope.go:117] "RemoveContainer" containerID="8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f" May 8 00:08:24.056930 containerd[1485]: time="2025-05-08T00:08:24.056519578Z" level=info msg="RemoveContainer for \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\"" May 8 00:08:24.060777 containerd[1485]: time="2025-05-08T00:08:24.060609838Z" level=info msg="RemoveContainer for \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\" returns successfully" May 8 00:08:24.061117 kubelet[2638]: I0508 00:08:24.061079 2638 scope.go:117] "RemoveContainer" containerID="63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5" May 8 00:08:24.061592 containerd[1485]: time="2025-05-08T00:08:24.061547076Z" level=error msg="ContainerStatus for \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\": not found" May 8 00:08:24.071279 kubelet[2638]: E0508 00:08:24.071159 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\": not found" containerID="63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5"
May 8 00:08:24.075813 kubelet[2638]: I0508 00:08:24.071290 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5"} err="failed to get container status \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\": rpc error: code = NotFound desc = an error occurred when try to find container \"63550b1a08b6612b5835da886d179b96c92e011af8a8240f98292c11a4f97ba5\": not found" May 8 00:08:24.075813 kubelet[2638]: I0508 00:08:24.075661 2638 scope.go:117] "RemoveContainer" containerID="a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507" May 8 00:08:24.076928 containerd[1485]: time="2025-05-08T00:08:24.076574823Z" level=error msg="ContainerStatus for \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\": not found" May 8 00:08:24.077060 kubelet[2638]: E0508 00:08:24.076820 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\": not found" containerID="a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507" May 8 00:08:24.077060 kubelet[2638]: I0508 00:08:24.076858 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507"} err="failed to get container status \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4e5a53528f89e863ae05712fdb960f36c9fe850bbc18cdb6d89a03056634507\": not found" May 8 00:08:24.077060 kubelet[2638]: I0508 00:08:24.076893 2638 scope.go:117] "RemoveContainer" containerID="04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b"
May 8 00:08:24.078077 containerd[1485]: time="2025-05-08T00:08:24.077788462Z" level=error msg="ContainerStatus for \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\": not found" May 8 00:08:24.078305 kubelet[2638]: E0508 00:08:24.077969 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\": not found" containerID="04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b" May 8 00:08:24.078305 kubelet[2638]: I0508 00:08:24.078020 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b"} err="failed to get container status \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\": rpc error: code = NotFound desc = an error occurred when try to find container \"04f2a6a4d3c538ebad4f9a4943687d937d623172c595c9479cfd48510ce1295b\": not found" May 8 00:08:24.078305 kubelet[2638]: I0508 00:08:24.078046 2638 scope.go:117] "RemoveContainer" containerID="f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5" May 8 00:08:24.078305 kubelet[2638]: E0508 00:08:24.078633 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\": not found" containerID="f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5" May 8 00:08:24.078305 kubelet[2638]: I0508
00:08:24.078671 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5"} err="failed to get container status \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\": not found" May 8 00:08:24.078305 kubelet[2638]: I0508 00:08:24.078738 2638 scope.go:117] "RemoveContainer" containerID="8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f" May 8 00:08:24.079272 containerd[1485]: time="2025-05-08T00:08:24.078486904Z" level=error msg="ContainerStatus for \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6985f367e52bf85c29cd20dc7dae73a628f87ed349649deee01c75f71d991c5\": not found" May 8 00:08:24.079869 containerd[1485]: time="2025-05-08T00:08:24.079573231Z" level=error msg="ContainerStatus for \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\": not found" May 8 00:08:24.080095 kubelet[2638]: E0508 00:08:24.079791 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\": not found" containerID="8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f" May 8 00:08:24.080095 kubelet[2638]: I0508 00:08:24.079827 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f"} err="failed to get container status 
\"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c55aefc4bd031cf7b1b4d375b07cad5fdfd5d99279813f88eecc8a0d1c5b99f\": not found" May 8 00:08:24.086193 kubelet[2638]: I0508 00:08:24.085546 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmg25\" (UniqueName: \"kubernetes.io/projected/8b977ea8-db27-4db3-9fc5-0231afe77acd-kube-api-access-mmg25\") pod \"8b977ea8-db27-4db3-9fc5-0231afe77acd\" (UID: \"8b977ea8-db27-4db3-9fc5-0231afe77acd\") " May 8 00:08:24.086193 kubelet[2638]: I0508 00:08:24.086096 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hostproc\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086193 kubelet[2638]: I0508 00:08:24.086161 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-bpf-maps\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086193 kubelet[2638]: I0508 00:08:24.086191 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-kernel\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086592 kubelet[2638]: I0508 00:08:24.086224 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-config-path\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 
00:08:24.086592 kubelet[2638]: I0508 00:08:24.086247 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cni-path\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086592 kubelet[2638]: I0508 00:08:24.086267 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-etc-cni-netd\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086592 kubelet[2638]: I0508 00:08:24.086290 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-lib-modules\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086592 kubelet[2638]: I0508 00:08:24.086312 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-xtables-lock\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086592 kubelet[2638]: I0508 00:08:24.086339 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrbst\" (UniqueName: \"kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-kube-api-access-hrbst\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086960 kubelet[2638]: I0508 00:08:24.086392 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-run\") pod 
\"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086960 kubelet[2638]: I0508 00:08:24.086416 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-net\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086960 kubelet[2638]: I0508 00:08:24.086576 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-cgroup\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086960 kubelet[2638]: I0508 00:08:24.086603 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b977ea8-db27-4db3-9fc5-0231afe77acd-cilium-config-path\") pod \"8b977ea8-db27-4db3-9fc5-0231afe77acd\" (UID: \"8b977ea8-db27-4db3-9fc5-0231afe77acd\") " May 8 00:08:24.086960 kubelet[2638]: I0508 00:08:24.086629 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hubble-tls\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.086960 kubelet[2638]: I0508 00:08:24.086666 2638 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcd976d3-aff7-4b77-ad7c-18942b5d0979-clustermesh-secrets\") pod \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\" (UID: \"bcd976d3-aff7-4b77-ad7c-18942b5d0979\") " May 8 00:08:24.089371 kubelet[2638]: I0508 00:08:24.087392 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hostproc" (OuterVolumeSpecName: "hostproc") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.089371 kubelet[2638]: I0508 00:08:24.088862 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.089371 kubelet[2638]: I0508 00:08:24.088901 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.092837 kubelet[2638]: I0508 00:08:24.092709 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cni-path" (OuterVolumeSpecName: "cni-path") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.092837 kubelet[2638]: I0508 00:08:24.092771 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.092837 kubelet[2638]: I0508 00:08:24.092792 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.092837 kubelet[2638]: I0508 00:08:24.092819 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.093751 kubelet[2638]: I0508 00:08:24.093543 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.093751 kubelet[2638]: I0508 00:08:24.093714 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.094849 kubelet[2638]: I0508 00:08:24.094220 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:08:24.098383 kubelet[2638]: I0508 00:08:24.098068 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:08:24.106029 kubelet[2638]: I0508 00:08:24.105954 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b977ea8-db27-4db3-9fc5-0231afe77acd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b977ea8-db27-4db3-9fc5-0231afe77acd" (UID: "8b977ea8-db27-4db3-9fc5-0231afe77acd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:08:24.106373 kubelet[2638]: I0508 00:08:24.106304 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd976d3-aff7-4b77-ad7c-18942b5d0979-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:08:24.109803 kubelet[2638]: I0508 00:08:24.109740 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b977ea8-db27-4db3-9fc5-0231afe77acd-kube-api-access-mmg25" (OuterVolumeSpecName: "kube-api-access-mmg25") pod "8b977ea8-db27-4db3-9fc5-0231afe77acd" (UID: "8b977ea8-db27-4db3-9fc5-0231afe77acd"). InnerVolumeSpecName "kube-api-access-mmg25". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:08:24.110002 kubelet[2638]: I0508 00:08:24.109820 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:08:24.110002 kubelet[2638]: I0508 00:08:24.109837 2638 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-kube-api-access-hrbst" (OuterVolumeSpecName: "kube-api-access-hrbst") pod "bcd976d3-aff7-4b77-ad7c-18942b5d0979" (UID: "bcd976d3-aff7-4b77-ad7c-18942b5d0979"). InnerVolumeSpecName "kube-api-access-hrbst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190612 2638 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hrbst\" (UniqueName: \"kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-kube-api-access-hrbst\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190682 2638 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-run\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190702 2638 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-net\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190716 2638 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-cgroup\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190733 2638 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b977ea8-db27-4db3-9fc5-0231afe77acd-cilium-config-path\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190750 2638 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hubble-tls\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190771 2638 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/bcd976d3-aff7-4b77-ad7c-18942b5d0979-clustermesh-secrets\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.190950 kubelet[2638]: I0508 00:08:24.190788 2638 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-hostproc\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190805 2638 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-bpf-maps\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190818 2638 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-host-proc-sys-kernel\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190833 2638 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mmg25\" (UniqueName: \"kubernetes.io/projected/8b977ea8-db27-4db3-9fc5-0231afe77acd-kube-api-access-mmg25\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190848 2638 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cilium-config-path\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190868 2638 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-etc-cni-netd\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190883 2638 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-lib-modules\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190897 2638 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-xtables-lock\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.191529 kubelet[2638]: I0508 00:08:24.190917 2638 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcd976d3-aff7-4b77-ad7c-18942b5d0979-cni-path\") on node \"ci-4230.1.1-n-e3439e552d\" DevicePath \"\"" May 8 00:08:24.581604 kubelet[2638]: E0508 00:08:24.581525 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:24.597260 systemd[1]: Removed slice kubepods-burstable-podbcd976d3_aff7_4b77_ad7c_18942b5d0979.slice - libcontainer container kubepods-burstable-podbcd976d3_aff7_4b77_ad7c_18942b5d0979.slice. May 8 00:08:24.597577 systemd[1]: kubepods-burstable-podbcd976d3_aff7_4b77_ad7c_18942b5d0979.slice: Consumed 11.198s CPU time, 166.9M memory peak, 43.5M read from disk, 13.3M written to disk. May 8 00:08:24.600250 systemd[1]: Removed slice kubepods-besteffort-pod8b977ea8_db27_4db3_9fc5_0231afe77acd.slice - libcontainer container kubepods-besteffort-pod8b977ea8_db27_4db3_9fc5_0231afe77acd.slice. May 8 00:08:24.600888 systemd[1]: kubepods-besteffort-pod8b977ea8_db27_4db3_9fc5_0231afe77acd.slice: Consumed 545ms CPU time, 29.1M memory peak, 5.4M read from disk, 4K written to disk. May 8 00:08:24.622687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1-rootfs.mount: Deactivated successfully. 
May 8 00:08:24.622889 systemd[1]: var-lib-kubelet-pods-bcd976d3\x2daff7\x2d4b77\x2dad7c\x2d18942b5d0979-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:08:24.623013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2-rootfs.mount: Deactivated successfully. May 8 00:08:24.623148 systemd[1]: var-lib-kubelet-pods-bcd976d3\x2daff7\x2d4b77\x2dad7c\x2d18942b5d0979-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:08:24.623262 systemd[1]: var-lib-kubelet-pods-bcd976d3\x2daff7\x2d4b77\x2dad7c\x2d18942b5d0979-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhrbst.mount: Deactivated successfully. May 8 00:08:24.623368 systemd[1]: var-lib-kubelet-pods-8b977ea8\x2ddb27\x2d4db3\x2d9fc5\x2d0231afe77acd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmg25.mount: Deactivated successfully. May 8 00:08:25.506827 sshd[4271]: Connection closed by 139.178.68.195 port 47760 May 8 00:08:25.508310 sshd-session[4268]: pam_unix(sshd:session): session closed for user core May 8 00:08:25.526142 systemd[1]: sshd@24-146.190.122.31:22-139.178.68.195:47760.service: Deactivated successfully. May 8 00:08:25.530818 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:08:25.531599 systemd[1]: session-25.scope: Consumed 1.045s CPU time, 26.3M memory peak. May 8 00:08:25.533792 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. May 8 00:08:25.542085 systemd[1]: Started sshd@25-146.190.122.31:22-139.178.68.195:58046.service - OpenSSH per-connection server daemon (139.178.68.195:58046). May 8 00:08:25.546229 systemd-logind[1463]: Removed session 25. 
May 8 00:08:25.626330 sshd[4435]: Accepted publickey for core from 139.178.68.195 port 58046 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:25.629073 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:25.640578 systemd-logind[1463]: New session 26 of user core. May 8 00:08:25.651777 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:08:26.435645 sshd[4438]: Connection closed by 139.178.68.195 port 58046 May 8 00:08:26.437178 sshd-session[4435]: pam_unix(sshd:session): session closed for user core May 8 00:08:26.460198 systemd[1]: sshd@25-146.190.122.31:22-139.178.68.195:58046.service: Deactivated successfully. May 8 00:08:26.469696 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:08:26.471679 kubelet[2638]: I0508 00:08:26.465043 2638 topology_manager.go:215] "Topology Admit Handler" podUID="976050a7-84bc-4e53-bc03-044ef5ae024a" podNamespace="kube-system" podName="cilium-n8k56" May 8 00:08:26.474042 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit. 
May 8 00:08:26.479565 kubelet[2638]: E0508 00:08:26.479513 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" containerName="mount-bpf-fs" May 8 00:08:26.479565 kubelet[2638]: E0508 00:08:26.479553 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" containerName="clean-cilium-state" May 8 00:08:26.479565 kubelet[2638]: E0508 00:08:26.479563 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" containerName="cilium-agent" May 8 00:08:26.479565 kubelet[2638]: E0508 00:08:26.479573 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" containerName="apply-sysctl-overwrites" May 8 00:08:26.479565 kubelet[2638]: E0508 00:08:26.479583 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b977ea8-db27-4db3-9fc5-0231afe77acd" containerName="cilium-operator" May 8 00:08:26.480069 kubelet[2638]: E0508 00:08:26.479591 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" containerName="mount-cgroup" May 8 00:08:26.480069 kubelet[2638]: I0508 00:08:26.479631 2638 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b977ea8-db27-4db3-9fc5-0231afe77acd" containerName="cilium-operator" May 8 00:08:26.480069 kubelet[2638]: I0508 00:08:26.479639 2638 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" containerName="cilium-agent" May 8 00:08:26.484252 systemd[1]: Started sshd@26-146.190.122.31:22-139.178.68.195:58056.service - OpenSSH per-connection server daemon (139.178.68.195:58056). May 8 00:08:26.491600 systemd-logind[1463]: Removed session 26. 
May 8 00:08:26.577506 systemd[1]: Created slice kubepods-burstable-pod976050a7_84bc_4e53_bc03_044ef5ae024a.slice - libcontainer container kubepods-burstable-pod976050a7_84bc_4e53_bc03_044ef5ae024a.slice. May 8 00:08:26.599159 kubelet[2638]: I0508 00:08:26.599108 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b977ea8-db27-4db3-9fc5-0231afe77acd" path="/var/lib/kubelet/pods/8b977ea8-db27-4db3-9fc5-0231afe77acd/volumes" May 8 00:08:26.602586 kubelet[2638]: I0508 00:08:26.602523 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd976d3-aff7-4b77-ad7c-18942b5d0979" path="/var/lib/kubelet/pods/bcd976d3-aff7-4b77-ad7c-18942b5d0979/volumes" May 8 00:08:26.631708 sshd[4448]: Accepted publickey for core from 139.178.68.195 port 58056 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:26.637376 kubelet[2638]: I0508 00:08:26.636246 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-cni-path\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.637376 kubelet[2638]: I0508 00:08:26.636315 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-etc-cni-netd\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.637376 kubelet[2638]: I0508 00:08:26.636353 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwmzb\" (UniqueName: \"kubernetes.io/projected/976050a7-84bc-4e53-bc03-044ef5ae024a-kube-api-access-hwmzb\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.637376 
kubelet[2638]: I0508 00:08:26.636386 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-hostproc\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.637376 kubelet[2638]: I0508 00:08:26.636411 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/976050a7-84bc-4e53-bc03-044ef5ae024a-cilium-ipsec-secrets\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.637376 kubelet[2638]: I0508 00:08:26.636467 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-cilium-run\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.637123 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:26.638163 kubelet[2638]: I0508 00:08:26.636495 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-bpf-maps\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638163 kubelet[2638]: I0508 00:08:26.636517 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-cilium-cgroup\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638163 kubelet[2638]: I0508 
00:08:26.636539 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/976050a7-84bc-4e53-bc03-044ef5ae024a-clustermesh-secrets\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638163 kubelet[2638]: I0508 00:08:26.636566 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-host-proc-sys-kernel\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638163 kubelet[2638]: I0508 00:08:26.636590 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/976050a7-84bc-4e53-bc03-044ef5ae024a-cilium-config-path\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638577 kubelet[2638]: I0508 00:08:26.636617 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-host-proc-sys-net\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638577 kubelet[2638]: I0508 00:08:26.636643 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/976050a7-84bc-4e53-bc03-044ef5ae024a-hubble-tls\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638577 kubelet[2638]: I0508 00:08:26.636668 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-lib-modules\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.638577 kubelet[2638]: I0508 00:08:26.636697 2638 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/976050a7-84bc-4e53-bc03-044ef5ae024a-xtables-lock\") pod \"cilium-n8k56\" (UID: \"976050a7-84bc-4e53-bc03-044ef5ae024a\") " pod="kube-system/cilium-n8k56" May 8 00:08:26.649925 systemd-logind[1463]: New session 27 of user core. May 8 00:08:26.665769 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 00:08:26.732889 sshd[4451]: Connection closed by 139.178.68.195 port 58056 May 8 00:08:26.731796 sshd-session[4448]: pam_unix(sshd:session): session closed for user core May 8 00:08:26.767902 kubelet[2638]: E0508 00:08:26.764537 2638 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:08:26.787824 systemd[1]: sshd@26-146.190.122.31:22-139.178.68.195:58056.service: Deactivated successfully. May 8 00:08:26.793148 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:08:26.804694 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit. May 8 00:08:26.821971 systemd[1]: Started sshd@27-146.190.122.31:22-139.178.68.195:58070.service - OpenSSH per-connection server daemon (139.178.68.195:58070). May 8 00:08:26.823919 systemd-logind[1463]: Removed session 27. 
May 8 00:08:26.895916 kubelet[2638]: E0508 00:08:26.895558 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:26.898466 containerd[1485]: time="2025-05-08T00:08:26.897562086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8k56,Uid:976050a7-84bc-4e53-bc03-044ef5ae024a,Namespace:kube-system,Attempt:0,}" May 8 00:08:26.902039 sshd[4461]: Accepted publickey for core from 139.178.68.195 port 58070 ssh2: RSA SHA256:EMQK/xXwyGW130jHG636zV1LD4ZeZqDZsuuaHw+qK90 May 8 00:08:26.905819 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:08:26.929532 systemd-logind[1463]: New session 28 of user core. May 8 00:08:26.935220 systemd[1]: Started session-28.scope - Session 28 of User core. May 8 00:08:26.949300 containerd[1485]: time="2025-05-08T00:08:26.946241614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:08:26.949300 containerd[1485]: time="2025-05-08T00:08:26.946341931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:08:26.949300 containerd[1485]: time="2025-05-08T00:08:26.946359998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:26.949300 containerd[1485]: time="2025-05-08T00:08:26.947569497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:08:26.977755 systemd[1]: Started cri-containerd-b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb.scope - libcontainer container b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb. 
May 8 00:08:27.053534 containerd[1485]: time="2025-05-08T00:08:27.053307797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8k56,Uid:976050a7-84bc-4e53-bc03-044ef5ae024a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\"" May 8 00:08:27.057869 kubelet[2638]: E0508 00:08:27.055157 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:27.076173 containerd[1485]: time="2025-05-08T00:08:27.076094495Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:08:27.096765 containerd[1485]: time="2025-05-08T00:08:27.096656806Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566\"" May 8 00:08:27.098755 containerd[1485]: time="2025-05-08T00:08:27.098661263Z" level=info msg="StartContainer for \"7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566\"" May 8 00:08:27.206786 systemd[1]: Started cri-containerd-7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566.scope - libcontainer container 7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566. May 8 00:08:27.258670 containerd[1485]: time="2025-05-08T00:08:27.258312641Z" level=info msg="StartContainer for \"7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566\" returns successfully" May 8 00:08:27.276231 systemd[1]: cri-containerd-7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566.scope: Deactivated successfully. 
May 8 00:08:27.319695 containerd[1485]: time="2025-05-08T00:08:27.319391141Z" level=info msg="shim disconnected" id=7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566 namespace=k8s.io May 8 00:08:27.320455 containerd[1485]: time="2025-05-08T00:08:27.320134988Z" level=warning msg="cleaning up after shim disconnected" id=7fd7f8b55ad1d2764a46af5c34891ee22ef6dacfc3ae894595717b655cee0566 namespace=k8s.io May 8 00:08:27.320455 containerd[1485]: time="2025-05-08T00:08:27.320178323Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:28.015250 kubelet[2638]: E0508 00:08:28.015007 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:28.024097 containerd[1485]: time="2025-05-08T00:08:28.023756918Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:08:28.050475 containerd[1485]: time="2025-05-08T00:08:28.047789506Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c\"" May 8 00:08:28.051105 containerd[1485]: time="2025-05-08T00:08:28.050891058Z" level=info msg="StartContainer for \"522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c\"" May 8 00:08:28.054395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187888127.mount: Deactivated successfully. May 8 00:08:28.126871 systemd[1]: Started cri-containerd-522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c.scope - libcontainer container 522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c. 
May 8 00:08:28.174731 containerd[1485]: time="2025-05-08T00:08:28.174559834Z" level=info msg="StartContainer for \"522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c\" returns successfully" May 8 00:08:28.184832 systemd[1]: cri-containerd-522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c.scope: Deactivated successfully. May 8 00:08:28.220231 containerd[1485]: time="2025-05-08T00:08:28.220088208Z" level=info msg="shim disconnected" id=522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c namespace=k8s.io May 8 00:08:28.220231 containerd[1485]: time="2025-05-08T00:08:28.220188259Z" level=warning msg="cleaning up after shim disconnected" id=522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c namespace=k8s.io May 8 00:08:28.220867 containerd[1485]: time="2025-05-08T00:08:28.220200664Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:28.757227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-522537ef1312dbb55c035adea101d5e4076fc24e38728b1434124cf57094176c-rootfs.mount: Deactivated successfully. 
May 8 00:08:29.026619 kubelet[2638]: E0508 00:08:29.026480 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:29.038382 containerd[1485]: time="2025-05-08T00:08:29.035499247Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:08:29.072161 containerd[1485]: time="2025-05-08T00:08:29.068451391Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d\"" May 8 00:08:29.074167 containerd[1485]: time="2025-05-08T00:08:29.072624515Z" level=info msg="StartContainer for \"61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d\"" May 8 00:08:29.073495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1858851967.mount: Deactivated successfully. May 8 00:08:29.147770 systemd[1]: Started cri-containerd-61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d.scope - libcontainer container 61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d. May 8 00:08:29.207050 containerd[1485]: time="2025-05-08T00:08:29.206947354Z" level=info msg="StartContainer for \"61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d\" returns successfully" May 8 00:08:29.215757 systemd[1]: cri-containerd-61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d.scope: Deactivated successfully. 
May 8 00:08:29.271033 containerd[1485]: time="2025-05-08T00:08:29.270927740Z" level=info msg="shim disconnected" id=61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d namespace=k8s.io May 8 00:08:29.272638 containerd[1485]: time="2025-05-08T00:08:29.271397093Z" level=warning msg="cleaning up after shim disconnected" id=61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d namespace=k8s.io May 8 00:08:29.272638 containerd[1485]: time="2025-05-08T00:08:29.271444397Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:29.461703 kubelet[2638]: I0508 00:08:29.457477 2638 setters.go:580] "Node became not ready" node="ci-4230.1.1-n-e3439e552d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:08:29Z","lastTransitionTime":"2025-05-08T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:08:29.756817 systemd[1]: run-containerd-runc-k8s.io-61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d-runc.PAennM.mount: Deactivated successfully. May 8 00:08:29.757003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61fd4e0c9c9eed87fc7fa0e8cb8d4a145fe1b1caedfd482b8c2f07361c579d1d-rootfs.mount: Deactivated successfully. 
May 8 00:08:30.066183 kubelet[2638]: E0508 00:08:30.066018 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:30.073247 containerd[1485]: time="2025-05-08T00:08:30.073159955Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:08:30.111027 containerd[1485]: time="2025-05-08T00:08:30.109164073Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736\"" May 8 00:08:30.114264 containerd[1485]: time="2025-05-08T00:08:30.113311157Z" level=info msg="StartContainer for \"d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736\"" May 8 00:08:30.189804 systemd[1]: Started cri-containerd-d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736.scope - libcontainer container d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736. May 8 00:08:30.242632 systemd[1]: cri-containerd-d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736.scope: Deactivated successfully. 
May 8 00:08:30.249498 containerd[1485]: time="2025-05-08T00:08:30.249251290Z" level=info msg="StartContainer for \"d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736\" returns successfully" May 8 00:08:30.304282 containerd[1485]: time="2025-05-08T00:08:30.304174365Z" level=info msg="shim disconnected" id=d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736 namespace=k8s.io May 8 00:08:30.304282 containerd[1485]: time="2025-05-08T00:08:30.304260526Z" level=warning msg="cleaning up after shim disconnected" id=d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736 namespace=k8s.io May 8 00:08:30.304282 containerd[1485]: time="2025-05-08T00:08:30.304275109Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:08:30.333574 containerd[1485]: time="2025-05-08T00:08:30.332309555Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:08:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:08:30.757347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d73a2ef642dc0a7f14e216975e3bc3b7998fb0378baceb9873d38685e0e41736-rootfs.mount: Deactivated successfully. 
May 8 00:08:31.076563 kubelet[2638]: E0508 00:08:31.075231 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:31.081690 containerd[1485]: time="2025-05-08T00:08:31.080590304Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:08:31.115729 containerd[1485]: time="2025-05-08T00:08:31.113663678Z" level=info msg="CreateContainer within sandbox \"b57a71eadb78252091926095614e53b17bd3822b57faf7722ee3b6de8004d4fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e135178a81701d07c8f9d40288f23c9355dfc1fcfc28e2a54e4cef8b0d35bae9\"" May 8 00:08:31.117483 containerd[1485]: time="2025-05-08T00:08:31.116097436Z" level=info msg="StartContainer for \"e135178a81701d07c8f9d40288f23c9355dfc1fcfc28e2a54e4cef8b0d35bae9\"" May 8 00:08:31.197826 systemd[1]: Started cri-containerd-e135178a81701d07c8f9d40288f23c9355dfc1fcfc28e2a54e4cef8b0d35bae9.scope - libcontainer container e135178a81701d07c8f9d40288f23c9355dfc1fcfc28e2a54e4cef8b0d35bae9. 
May 8 00:08:31.300225 containerd[1485]: time="2025-05-08T00:08:31.300149469Z" level=info msg="StartContainer for \"e135178a81701d07c8f9d40288f23c9355dfc1fcfc28e2a54e4cef8b0d35bae9\" returns successfully" May 8 00:08:31.922510 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 8 00:08:32.087578 kubelet[2638]: E0508 00:08:32.087502 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:32.110991 kubelet[2638]: I0508 00:08:32.110924 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n8k56" podStartSLOduration=6.110896625 podStartE2EDuration="6.110896625s" podCreationTimestamp="2025-05-08 00:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:08:32.109057138 +0000 UTC m=+115.711686461" watchObservedRunningTime="2025-05-08 00:08:32.110896625 +0000 UTC m=+115.713525975" May 8 00:08:33.089454 kubelet[2638]: E0508 00:08:33.089393 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:33.787499 systemd[1]: run-containerd-runc-k8s.io-e135178a81701d07c8f9d40288f23c9355dfc1fcfc28e2a54e4cef8b0d35bae9-runc.W2sdRi.mount: Deactivated successfully. 
May 8 00:08:35.750171 systemd-networkd[1379]: lxc_health: Link UP May 8 00:08:35.755509 systemd-networkd[1379]: lxc_health: Gained carrier May 8 00:08:36.130688 kubelet[2638]: E0508 00:08:36.130575 2638 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33026->127.0.0.1:45483: write tcp 127.0.0.1:33026->127.0.0.1:45483: write: broken pipe May 8 00:08:36.591622 kubelet[2638]: I0508 00:08:36.591409 2638 scope.go:117] "RemoveContainer" containerID="121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070" May 8 00:08:36.594918 containerd[1485]: time="2025-05-08T00:08:36.594860365Z" level=info msg="RemoveContainer for \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\"" May 8 00:08:36.602403 containerd[1485]: time="2025-05-08T00:08:36.602315684Z" level=info msg="RemoveContainer for \"121f79592338f09c91d98259c2d07ff87fa87a401e914c07830327afdf026070\" returns successfully" May 8 00:08:36.607509 containerd[1485]: time="2025-05-08T00:08:36.605685311Z" level=info msg="StopPodSandbox for \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\"" May 8 00:08:36.607509 containerd[1485]: time="2025-05-08T00:08:36.605854045Z" level=info msg="TearDown network for sandbox \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" successfully" May 8 00:08:36.607509 containerd[1485]: time="2025-05-08T00:08:36.605871648Z" level=info msg="StopPodSandbox for \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" returns successfully" May 8 00:08:36.607509 containerd[1485]: time="2025-05-08T00:08:36.606842362Z" level=info msg="RemovePodSandbox for \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\"" May 8 00:08:36.607509 containerd[1485]: time="2025-05-08T00:08:36.606910143Z" level=info msg="Forcibly stopping sandbox \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\"" May 8 00:08:36.607509 containerd[1485]: time="2025-05-08T00:08:36.607015139Z" level=info 
msg="TearDown network for sandbox \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" successfully" May 8 00:08:36.619003 containerd[1485]: time="2025-05-08T00:08:36.618913544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:08:36.619203 containerd[1485]: time="2025-05-08T00:08:36.619086298Z" level=info msg="RemovePodSandbox \"0eaff26c0b5d99843678c0d439bdac0477dd3c460bc1198e3cae119a642f73f2\" returns successfully" May 8 00:08:36.622464 containerd[1485]: time="2025-05-08T00:08:36.622351022Z" level=info msg="StopPodSandbox for \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\"" May 8 00:08:36.622666 containerd[1485]: time="2025-05-08T00:08:36.622537045Z" level=info msg="TearDown network for sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" successfully" May 8 00:08:36.622666 containerd[1485]: time="2025-05-08T00:08:36.622558979Z" level=info msg="StopPodSandbox for \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" returns successfully" May 8 00:08:36.628462 containerd[1485]: time="2025-05-08T00:08:36.627269868Z" level=info msg="RemovePodSandbox for \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\"" May 8 00:08:36.628462 containerd[1485]: time="2025-05-08T00:08:36.627339344Z" level=info msg="Forcibly stopping sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\"" May 8 00:08:36.628462 containerd[1485]: time="2025-05-08T00:08:36.627519942Z" level=info msg="TearDown network for sandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" successfully" May 8 00:08:36.636512 containerd[1485]: time="2025-05-08T00:08:36.636437810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:08:36.637968 containerd[1485]: time="2025-05-08T00:08:36.637257030Z" level=info msg="RemovePodSandbox \"fb3b4d9d241c65a0fe4af5a4e0b0d2917a9c7c902add9b864d5a982c216d38f1\" returns successfully" May 8 00:08:36.901850 kubelet[2638]: E0508 00:08:36.901170 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:37.100856 kubelet[2638]: E0508 00:08:37.100738 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:37.569688 systemd-networkd[1379]: lxc_health: Gained IPv6LL May 8 00:08:38.103433 kubelet[2638]: E0508 00:08:38.103372 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 8 00:08:38.357280 kubelet[2638]: E0508 00:08:38.356873 2638 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33030->127.0.0.1:45483: write tcp 127.0.0.1:33030->127.0.0.1:45483: write: connection reset by peer May 8 00:08:42.864320 sshd[4477]: Connection closed by 139.178.68.195 port 58070 May 8 00:08:42.866454 sshd-session[4461]: pam_unix(sshd:session): session closed for user core May 8 00:08:42.878359 systemd[1]: sshd@27-146.190.122.31:22-139.178.68.195:58070.service: Deactivated successfully. May 8 00:08:42.885332 systemd[1]: session-28.scope: Deactivated successfully. May 8 00:08:42.892149 systemd-logind[1463]: Session 28 logged out. Waiting for processes to exit. May 8 00:08:42.897681 systemd-logind[1463]: Removed session 28.