Nov 4 23:53:49.252561 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:53:49.252603 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:53:49.252622 kernel: BIOS-provided physical RAM map:
Nov 4 23:53:49.252630 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 23:53:49.252637 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 23:53:49.252644 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 23:53:49.252653 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 4 23:53:49.252665 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 4 23:53:49.252672 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:53:49.252686 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 23:53:49.252693 kernel: NX (Execute Disable) protection: active
Nov 4 23:53:49.252700 kernel: APIC: Static calls initialized
Nov 4 23:53:49.252708 kernel: SMBIOS 2.8 present.
Nov 4 23:53:49.252715 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 4 23:53:49.252724 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:53:49.252738 kernel: Hypervisor detected: KVM
Nov 4 23:53:49.252750 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 23:53:49.252758 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:53:49.252766 kernel: kvm-clock: using sched offset of 4335203862 cycles
Nov 4 23:53:49.252775 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:53:49.252783 kernel: tsc: Detected 1995.312 MHz processor
Nov 4 23:53:49.252792 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:53:49.252801 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:53:49.252815 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 4 23:53:49.252824 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 23:53:49.252833 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:53:49.252841 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:53:49.252849 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 4 23:53:49.252857 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:49.252865 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:49.252880 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:49.252888 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 4 23:53:49.252896 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:49.252904 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:49.252912 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:49.252920 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:53:49.252928 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 4 23:53:49.252962 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 4 23:53:49.252971 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 4 23:53:49.252979 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 4 23:53:49.252996 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 4 23:53:49.253004 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 4 23:53:49.253019 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 4 23:53:49.253027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 4 23:53:49.253036 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 4 23:53:49.253045 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 4 23:53:49.253053 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 4 23:53:49.253062 kernel: Zone ranges:
Nov 4 23:53:49.253070 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:53:49.253085 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 4 23:53:49.253093 kernel: Normal empty
Nov 4 23:53:49.253101 kernel: Device empty
Nov 4 23:53:49.253110 kernel: Movable zone start for each node
Nov 4 23:53:49.253118 kernel: Early memory node ranges
Nov 4 23:53:49.253127 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 23:53:49.253135 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 4 23:53:49.253149 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 4 23:53:49.253158 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:53:49.253166 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 23:53:49.253175 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 4 23:53:49.253183 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:53:49.253195 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:53:49.253204 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:53:49.253222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:53:49.253231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:53:49.253239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:53:49.253251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:53:49.253260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:53:49.253268 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:53:49.253277 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:53:49.253285 kernel: TSC deadline timer available
Nov 4 23:53:49.253300 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:53:49.253308 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:53:49.253316 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:53:49.253324 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:53:49.253333 kernel: CPU topo: Num. cores per package: 2
Nov 4 23:53:49.253341 kernel: CPU topo: Num. threads per package: 2
Nov 4 23:53:49.253349 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 23:53:49.253364 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:53:49.253373 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 4 23:53:49.253381 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:53:49.253390 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:53:49.253398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 23:53:49.253407 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 23:53:49.253415 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 23:53:49.253429 kernel: pcpu-alloc: [0] 0 1
Nov 4 23:53:49.253438 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 4 23:53:49.253447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:53:49.253456 kernel: random: crng init done
Nov 4 23:53:49.253465 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:53:49.253473 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 4 23:53:49.253482 kernel: Fallback order for Node 0: 0
Nov 4 23:53:49.253496 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 4 23:53:49.253505 kernel: Policy zone: DMA32
Nov 4 23:53:49.253513 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:53:49.253521 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 23:53:49.253530 kernel: Kernel/User page tables isolation: enabled
Nov 4 23:53:49.253538 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:53:49.253547 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:53:49.253561 kernel: Dynamic Preempt: voluntary
Nov 4 23:53:49.253570 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:53:49.253581 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:53:49.253589 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 23:53:49.253598 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:53:49.253606 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:53:49.253615 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:53:49.253623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:53:49.253637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 23:53:49.253646 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:53:49.253658 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:53:49.253667 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:53:49.253675 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 23:53:49.253684 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:53:49.253692 kernel: Console: colour VGA+ 80x25
Nov 4 23:53:49.253707 kernel: printk: legacy console [tty0] enabled
Nov 4 23:53:49.253716 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:53:49.253724 kernel: ACPI: Core revision 20240827
Nov 4 23:53:49.253733 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:53:49.253761 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:53:49.253776 kernel: x2apic enabled
Nov 4 23:53:49.253786 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:53:49.253795 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:53:49.253804 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 4 23:53:49.253823 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Nov 4 23:53:49.253832 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 4 23:53:49.253844 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 4 23:53:49.253858 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:53:49.253880 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:53:49.253893 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:53:49.253906 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 4 23:53:49.253920 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:53:49.253932 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:53:49.259021 kernel: MDS: Mitigation: Clear CPU buffers
Nov 4 23:53:49.259061 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 23:53:49.259105 kernel: active return thunk: its_return_thunk
Nov 4 23:53:49.259115 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 4 23:53:49.259125 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:53:49.259134 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:53:49.259144 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:53:49.259153 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:53:49.259163 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 4 23:53:49.259180 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:53:49.259189 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:53:49.259199 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:53:49.259208 kernel: landlock: Up and running.
Nov 4 23:53:49.259217 kernel: SELinux: Initializing.
Nov 4 23:53:49.259227 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:53:49.259237 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:53:49.259254 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 4 23:53:49.259263 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 4 23:53:49.259273 kernel: signal: max sigframe size: 1776
Nov 4 23:53:49.259282 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:53:49.259296 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:53:49.259305 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:53:49.259315 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 4 23:53:49.259331 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:53:49.259349 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:53:49.259366 kernel: .... node #0, CPUs: #1
Nov 4 23:53:49.259379 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 23:53:49.259393 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Nov 4 23:53:49.259483 kernel: Memory: 1989432K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 102616K reserved, 0K cma-reserved)
Nov 4 23:53:49.259498 kernel: devtmpfs: initialized
Nov 4 23:53:49.259521 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:53:49.259530 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:53:49.259540 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 23:53:49.259549 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:53:49.259558 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:53:49.259567 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:53:49.259577 kernel: audit: type=2000 audit(1762300426.640:1): state=initialized audit_enabled=0 res=1
Nov 4 23:53:49.259593 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:53:49.259603 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:53:49.259612 kernel: cpuidle: using governor menu
Nov 4 23:53:49.259621 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:53:49.259631 kernel: dca service started, version 1.12.1
Nov 4 23:53:49.259641 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:53:49.259650 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:53:49.259667 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:53:49.259676 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:53:49.259685 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:53:49.259695 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:53:49.259705 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:53:49.259714 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:53:49.259723 kernel: ACPI: Interpreter enabled
Nov 4 23:53:49.259732 kernel: ACPI: PM: (supports S0 S5)
Nov 4 23:53:49.259748 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:53:49.259758 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:53:49.259767 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:53:49.259776 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 4 23:53:49.259786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:53:49.260132 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:53:49.260397 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 4 23:53:49.260577 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 4 23:53:49.260592 kernel: acpiphp: Slot [3] registered
Nov 4 23:53:49.260602 kernel: acpiphp: Slot [4] registered
Nov 4 23:53:49.260613 kernel: acpiphp: Slot [5] registered
Nov 4 23:53:49.260622 kernel: acpiphp: Slot [6] registered
Nov 4 23:53:49.260646 kernel: acpiphp: Slot [7] registered
Nov 4 23:53:49.260655 kernel: acpiphp: Slot [8] registered
Nov 4 23:53:49.260665 kernel: acpiphp: Slot [9] registered
Nov 4 23:53:49.260674 kernel: acpiphp: Slot [10] registered
Nov 4 23:53:49.260684 kernel: acpiphp: Slot [11] registered
Nov 4 23:53:49.260694 kernel: acpiphp: Slot [12] registered
Nov 4 23:53:49.260703 kernel: acpiphp: Slot [13] registered
Nov 4 23:53:49.260712 kernel: acpiphp: Slot [14] registered
Nov 4 23:53:49.260729 kernel: acpiphp: Slot [15] registered
Nov 4 23:53:49.260739 kernel: acpiphp: Slot [16] registered
Nov 4 23:53:49.260748 kernel: acpiphp: Slot [17] registered
Nov 4 23:53:49.260757 kernel: acpiphp: Slot [18] registered
Nov 4 23:53:49.260767 kernel: acpiphp: Slot [19] registered
Nov 4 23:53:49.260776 kernel: acpiphp: Slot [20] registered
Nov 4 23:53:49.260785 kernel: acpiphp: Slot [21] registered
Nov 4 23:53:49.260801 kernel: acpiphp: Slot [22] registered
Nov 4 23:53:49.260810 kernel: acpiphp: Slot [23] registered
Nov 4 23:53:49.260819 kernel: acpiphp: Slot [24] registered
Nov 4 23:53:49.260829 kernel: acpiphp: Slot [25] registered
Nov 4 23:53:49.260838 kernel: acpiphp: Slot [26] registered
Nov 4 23:53:49.260848 kernel: acpiphp: Slot [27] registered
Nov 4 23:53:49.260857 kernel: acpiphp: Slot [28] registered
Nov 4 23:53:49.260866 kernel: acpiphp: Slot [29] registered
Nov 4 23:53:49.260882 kernel: acpiphp: Slot [30] registered
Nov 4 23:53:49.260892 kernel: acpiphp: Slot [31] registered
Nov 4 23:53:49.260901 kernel: PCI host bridge to bus 0000:00
Nov 4 23:53:49.261082 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:53:49.261237 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:53:49.261364 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:53:49.261551 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 4 23:53:49.261700 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 4 23:53:49.261824 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:53:49.262029 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:53:49.262191 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:53:49.262385 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 4 23:53:49.262528 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 4 23:53:49.262734 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 4 23:53:49.264297 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 4 23:53:49.264514 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 4 23:53:49.264651 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 4 23:53:49.264851 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 4 23:53:49.265043 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 4 23:53:49.265194 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 4 23:53:49.265366 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 4 23:53:49.265562 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 4 23:53:49.265771 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:53:49.265916 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 4 23:53:49.267700 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 4 23:53:49.267871 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 4 23:53:49.268052 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 4 23:53:49.268197 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:53:49.268400 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:53:49.268558 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 4 23:53:49.268738 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 4 23:53:49.268881 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 4 23:53:49.271195 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:53:49.271370 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 4 23:53:49.271611 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 4 23:53:49.271770 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 4 23:53:49.271919 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:53:49.272135 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 4 23:53:49.272277 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 4 23:53:49.272431 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 4 23:53:49.273678 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:53:49.273840 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 23:53:49.275155 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 4 23:53:49.275351 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 4 23:53:49.275538 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:53:49.275703 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 4 23:53:49.275840 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 4 23:53:49.276035 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 4 23:53:49.276213 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:53:49.276409 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 4 23:53:49.276570 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 4 23:53:49.276584 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:53:49.276594 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:53:49.276604 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:53:49.276614 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:53:49.276623 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 4 23:53:49.276643 kernel: iommu: Default domain type: Translated
Nov 4 23:53:49.276652 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:53:49.276662 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:53:49.276671 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:53:49.276681 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 23:53:49.276690 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 4 23:53:49.276874 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 4 23:53:49.278663 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 4 23:53:49.278820 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:53:49.278834 kernel: vgaarb: loaded
Nov 4 23:53:49.278844 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:53:49.278854 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:53:49.278864 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:53:49.278874 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:53:49.278883 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:53:49.278915 kernel: pnp: PnP ACPI init
Nov 4 23:53:49.278925 kernel: pnp: PnP ACPI: found 4 devices
Nov 4 23:53:49.278934 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:53:49.278961 kernel: NET: Registered PF_INET protocol family
Nov 4 23:53:49.278971 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:53:49.278980 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 4 23:53:49.278990 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:53:49.279007 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:53:49.279017 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 4 23:53:49.279026 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 4 23:53:49.279036 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:53:49.279045 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:53:49.279055 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:53:49.279065 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:53:49.279230 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:53:49.279354 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:53:49.279534 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:53:49.279659 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 4 23:53:49.279784 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 4 23:53:49.279928 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 4 23:53:49.280133 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 4 23:53:49.280148 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 4 23:53:49.280289 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28516 usecs
Nov 4 23:53:49.280302 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:53:49.280312 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 4 23:53:49.280323 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 4 23:53:49.280333 kernel: Initialise system trusted keyrings
Nov 4 23:53:49.280353 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 4 23:53:49.280363 kernel: Key type asymmetric registered
Nov 4 23:53:49.280372 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:53:49.280390 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:53:49.280400 kernel: io scheduler mq-deadline registered
Nov 4 23:53:49.280410 kernel: io scheduler kyber registered
Nov 4 23:53:49.280420 kernel: io scheduler bfq registered
Nov 4 23:53:49.280436 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:53:49.280446 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 4 23:53:49.280456 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 4 23:53:49.280466 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 4 23:53:49.280476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:53:49.280486 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:53:49.280495 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:53:49.280511 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:53:49.280520 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:53:49.280530 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 23:53:49.280698 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 4 23:53:49.280838 kernel: rtc_cmos 00:03: registered as rtc0
Nov 4 23:53:49.281044 kernel: rtc_cmos 00:03: setting system clock to 2025-11-04T23:53:47 UTC (1762300427)
Nov 4 23:53:49.281195 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 4 23:53:49.281209 kernel: intel_pstate: CPU model not supported
Nov 4 23:53:49.281219 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:53:49.281229 kernel: Segment Routing with IPv6
Nov 4 23:53:49.281240 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:53:49.281249 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:53:49.281259 kernel: Key type dns_resolver registered
Nov 4 23:53:49.281279 kernel: IPI shorthand broadcast: enabled
Nov 4 23:53:49.281289 kernel: sched_clock: Marking stable (1462006132, 260130600)->(1928725089, -206588357)
Nov 4 23:53:49.281298 kernel: registered taskstats version 1
Nov 4 23:53:49.281308 kernel: Loading compiled-in X.509 certificates
Nov 4 23:53:49.281318 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:53:49.281327 kernel: Demotion targets for Node 0: null
Nov 4 23:53:49.281337 kernel: Key type .fscrypt registered
Nov 4 23:53:49.281354 kernel: Key type fscrypt-provisioning registered
Nov 4 23:53:49.281405 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 23:53:49.281421 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:53:49.281432 kernel: ima: No architecture policies found
Nov 4 23:53:49.281442 kernel: clk: Disabling unused clocks
Nov 4 23:53:49.281451 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:53:49.281462 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:53:49.281472 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:53:49.281489 kernel: Run /init as init process
Nov 4 23:53:49.281499 kernel: with arguments:
Nov 4 23:53:49.281509 kernel: /init
Nov 4 23:53:49.281519 kernel: with environment:
Nov 4 23:53:49.281529 kernel: HOME=/
Nov 4 23:53:49.281538 kernel: TERM=linux
Nov 4 23:53:49.281548 kernel: SCSI subsystem initialized
Nov 4 23:53:49.281565 kernel: libata version 3.00 loaded.
Nov 4 23:53:49.281722 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 4 23:53:49.281895 kernel: scsi host0: ata_piix
Nov 4 23:53:49.282079 kernel: scsi host1: ata_piix
Nov 4 23:53:49.282096 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 4 23:53:49.282106 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 4 23:53:49.282132 kernel: ACPI: bus type USB registered
Nov 4 23:53:49.282142 kernel: usbcore: registered new interface driver usbfs
Nov 4 23:53:49.282153 kernel: usbcore: registered new interface driver hub
Nov 4 23:53:49.282162 kernel: usbcore: registered new device driver usb
Nov 4 23:53:49.282307 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 4 23:53:49.282447 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 4 23:53:49.282582 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 4 23:53:49.282733 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 4 23:53:49.282903 kernel: hub 1-0:1.0: USB hub found
Nov 4 23:53:49.283123 kernel: hub 1-0:1.0: 2 ports detected
Nov 4 23:53:49.283310 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 4 23:53:49.283482 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 4 23:53:49.283498 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:53:49.283509 kernel: GPT:16515071 != 125829119
Nov 4 23:53:49.283519 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:53:49.283528 kernel: GPT:16515071 != 125829119
Nov 4 23:53:49.283555 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:53:49.283565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 4 23:53:49.283723 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 4 23:53:49.283857 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 4 23:53:49.284025 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Nov 4 23:53:49.284174 kernel: scsi host2: Virtio SCSI HBA Nov 4 23:53:49.284212 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 23:53:49.284226 kernel: device-mapper: uevent: version 1.0.3 Nov 4 23:53:49.284241 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 23:53:49.284255 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 23:53:49.284269 kernel: raid6: avx2x4 gen() 25277 MB/s Nov 4 23:53:49.284281 kernel: raid6: avx2x2 gen() 25914 MB/s Nov 4 23:53:49.284308 kernel: raid6: avx2x1 gen() 16601 MB/s Nov 4 23:53:49.284323 kernel: raid6: using algorithm avx2x2 gen() 25914 MB/s Nov 4 23:53:49.284338 kernel: raid6: .... 
xor() 16647 MB/s, rmw enabled Nov 4 23:53:49.284364 kernel: raid6: using avx2x2 recovery algorithm Nov 4 23:53:49.284374 kernel: xor: automatically using best checksumming function avx Nov 4 23:53:49.284384 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 23:53:49.284395 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (161) Nov 4 23:53:49.284405 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc Nov 4 23:53:49.284422 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:53:49.284433 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 23:53:49.284443 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 23:53:49.284452 kernel: loop: module loaded Nov 4 23:53:49.284462 kernel: loop0: detected capacity change from 0 to 100120 Nov 4 23:53:49.284472 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 23:53:49.284484 systemd[1]: Successfully made /usr/ read-only. Nov 4 23:53:49.284505 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:53:49.284516 systemd[1]: Detected virtualization kvm. Nov 4 23:53:49.284526 systemd[1]: Detected architecture x86-64. Nov 4 23:53:49.284537 systemd[1]: Running in initrd. Nov 4 23:53:49.284546 systemd[1]: No hostname configured, using default hostname. Nov 4 23:53:49.284564 systemd[1]: Hostname set to . Nov 4 23:53:49.284574 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 23:53:49.284584 systemd[1]: Queued start job for default target initrd.target. 
Nov 4 23:53:49.284595 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:53:49.284605 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:53:49.284615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:53:49.284627 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 23:53:49.284645 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:53:49.284656 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 23:53:49.284667 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 23:53:49.284677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:53:49.284687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:53:49.284704 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:53:49.284715 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:53:49.284725 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:53:49.284735 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:53:49.284745 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:53:49.284756 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:53:49.284767 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:53:49.284784 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 23:53:49.284794 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 23:53:49.284805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 4 23:53:49.284815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:53:49.284825 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:53:49.284835 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:53:49.284846 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 23:53:49.284863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 23:53:49.284873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:53:49.284883 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 23:53:49.284894 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 23:53:49.284905 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 23:53:49.284914 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:53:49.284925 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:53:49.284959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:53:49.284971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 23:53:49.284981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:53:49.284998 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 23:53:49.285009 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 23:53:49.285066 systemd-journald[298]: Collecting audit messages is disabled. Nov 4 23:53:49.285099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Nov 4 23:53:49.285109 kernel: Bridge firewalling registered Nov 4 23:53:49.285119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:53:49.285138 systemd-journald[298]: Journal started Nov 4 23:53:49.285161 systemd-journald[298]: Runtime Journal (/run/log/journal/8cdbeea6b8f443219a793938a99692d9) is 4.9M, max 39.2M, 34.3M free. Nov 4 23:53:49.277441 systemd-modules-load[299]: Inserted module 'br_netfilter' Nov 4 23:53:49.372076 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:53:49.373369 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:49.375085 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:53:49.379865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 23:53:49.383132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:53:49.386309 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:53:49.391309 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:53:49.416191 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:53:49.419224 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:53:49.426730 systemd-tmpfiles[319]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 23:53:49.438023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:53:49.439910 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:53:49.443444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:53:49.447237 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 4 23:53:49.486292 dracut-cmdline[339]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:53:49.514866 systemd-resolved[328]: Positive Trust Anchors: Nov 4 23:53:49.514886 systemd-resolved[328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:53:49.514890 systemd-resolved[328]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:53:49.514926 systemd-resolved[328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:53:49.552370 systemd-resolved[328]: Defaulting to hostname 'linux'. Nov 4 23:53:49.553932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:53:49.554868 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:53:49.646997 kernel: Loading iSCSI transport class v2.0-870. 
Nov 4 23:53:49.667000 kernel: iscsi: registered transport (tcp) Nov 4 23:53:49.698215 kernel: iscsi: registered transport (qla4xxx) Nov 4 23:53:49.698316 kernel: QLogic iSCSI HBA Driver Nov 4 23:53:49.734727 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:53:49.755906 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:53:49.760562 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:53:49.823344 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 23:53:49.827259 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 23:53:49.831159 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 23:53:49.880398 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:53:49.884139 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:53:49.918564 systemd-udevd[582]: Using default interface naming scheme 'v257'. Nov 4 23:53:49.932791 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:53:49.937241 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 23:53:49.968818 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:53:49.974158 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:53:49.976225 dracut-pre-trigger[651]: rd.md=0: removing MD RAID activation Nov 4 23:53:50.022763 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:53:50.034350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 23:53:50.039053 systemd-networkd[688]: lo: Link UP Nov 4 23:53:50.039067 systemd-networkd[688]: lo: Gained carrier Nov 4 23:53:50.040162 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:53:50.041887 systemd[1]: Reached target network.target - Network. Nov 4 23:53:50.132038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:53:50.136776 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 23:53:50.277737 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 4 23:53:50.296098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 23:53:50.311764 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 4 23:53:50.324188 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 4 23:53:50.326815 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 23:53:50.341978 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 23:53:50.346036 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 4 23:53:50.354562 disk-uuid[747]: Primary Header is updated. Nov 4 23:53:50.354562 disk-uuid[747]: Secondary Entries is updated. Nov 4 23:53:50.354562 disk-uuid[747]: Secondary Header is updated. Nov 4 23:53:50.384645 kernel: AES CTR mode by8 optimization enabled Nov 4 23:53:50.461046 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 4 23:53:50.463435 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Nov 4 23:53:50.466101 systemd-networkd[688]: eth0: Link UP Nov 4 23:53:50.466507 systemd-networkd[688]: eth0: Gained carrier Nov 4 23:53:50.466530 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 4 23:53:50.490083 systemd-networkd[688]: eth0: DHCPv4 address 64.23.154.5/20, gateway 64.23.144.1 acquired from 169.254.169.253 Nov 4 23:53:50.498027 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:53:50.498036 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:53:50.505539 systemd-networkd[688]: eth1: Link UP Nov 4 23:53:50.505888 systemd-networkd[688]: eth1: Gained carrier Nov 4 23:53:50.505906 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:53:50.516085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:53:50.517193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:50.518360 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:53:50.522120 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.28/20 acquired from 169.254.169.253 Nov 4 23:53:50.523792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:53:50.613896 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 23:53:50.659691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:50.662881 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:53:50.663735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 4 23:53:50.665239 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:53:50.668228 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 23:53:50.697134 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:53:51.460088 disk-uuid[748]: Warning: The kernel is still using the old partition table. Nov 4 23:53:51.460088 disk-uuid[748]: The new table will be used at the next reboot or after you Nov 4 23:53:51.460088 disk-uuid[748]: run partprobe(8) or kpartx(8) Nov 4 23:53:51.460088 disk-uuid[748]: The operation has completed successfully. Nov 4 23:53:51.471559 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 23:53:51.471737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 23:53:51.474812 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 23:53:51.515424 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Nov 4 23:53:51.515510 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:53:51.518345 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:53:51.526885 kernel: BTRFS info (device vda6): turning on async discard Nov 4 23:53:51.527023 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 23:53:51.538039 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:53:51.539705 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 23:53:51.542639 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 4 23:53:51.636210 systemd-networkd[688]: eth1: Gained IPv6LL Nov 4 23:53:51.637134 systemd-networkd[688]: eth0: Gained IPv6LL Nov 4 23:53:51.802267 ignition[857]: Ignition 2.22.0 Nov 4 23:53:51.803034 ignition[857]: Stage: fetch-offline Nov 4 23:53:51.803105 ignition[857]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:51.803120 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:53:51.808122 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:53:51.803257 ignition[857]: parsed url from cmdline: "" Nov 4 23:53:51.803262 ignition[857]: no config URL provided Nov 4 23:53:51.803268 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:53:51.803279 ignition[857]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:53:51.803286 ignition[857]: failed to fetch config: resource requires networking Nov 4 23:53:51.804106 ignition[857]: Ignition finished successfully Nov 4 23:53:51.813224 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 4 23:53:51.854372 ignition[864]: Ignition 2.22.0 Nov 4 23:53:51.854394 ignition[864]: Stage: fetch Nov 4 23:53:51.854628 ignition[864]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:51.854642 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:53:51.854777 ignition[864]: parsed url from cmdline: "" Nov 4 23:53:51.854782 ignition[864]: no config URL provided Nov 4 23:53:51.854791 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:53:51.854802 ignition[864]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:53:51.854842 ignition[864]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 4 23:53:51.887725 ignition[864]: GET result: OK Nov 4 23:53:51.888855 ignition[864]: parsing config with SHA512: 87b70759f6841063c26ce3d1ce4d41a103f0d46f71358438677f37923c9ea955ce87f8af7d3261b4b8e0319de63ae0f87f224248d4b3efeb80c1a7648391df5a Nov 4 23:53:51.894570 unknown[864]: fetched base config from "system" Nov 4 23:53:51.894586 unknown[864]: fetched base config from "system" Nov 4 23:53:51.894960 ignition[864]: fetch: fetch complete Nov 4 23:53:51.894594 unknown[864]: fetched user config from "digitalocean" Nov 4 23:53:51.894969 ignition[864]: fetch: fetch passed Nov 4 23:53:51.898585 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 4 23:53:51.895038 ignition[864]: Ignition finished successfully Nov 4 23:53:51.901529 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 23:53:51.963995 ignition[870]: Ignition 2.22.0 Nov 4 23:53:51.964011 ignition[870]: Stage: kargs Nov 4 23:53:51.964182 ignition[870]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:51.964192 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:53:51.968263 ignition[870]: kargs: kargs passed Nov 4 23:53:51.968359 ignition[870]: Ignition finished successfully Nov 4 23:53:51.970126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 4 23:53:51.973594 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 23:53:52.015844 ignition[876]: Ignition 2.22.0 Nov 4 23:53:52.015862 ignition[876]: Stage: disks Nov 4 23:53:52.016043 ignition[876]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:52.016052 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:53:52.018854 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 23:53:52.017030 ignition[876]: disks: disks passed Nov 4 23:53:52.020930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 23:53:52.017098 ignition[876]: Ignition finished successfully Nov 4 23:53:52.030269 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 23:53:52.031882 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:53:52.033264 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:53:52.035000 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:53:52.039126 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 23:53:52.095066 systemd-fsck[885]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 4 23:53:52.099711 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 23:53:52.105132 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 23:53:52.257014 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 4 23:53:52.257126 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 23:53:52.258752 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 23:53:52.262085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:53:52.264873 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 4 23:53:52.272513 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 4 23:53:52.276699 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 4 23:53:52.277636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 23:53:52.277687 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:53:52.295486 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (893) Nov 4 23:53:52.295515 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:53:52.295529 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:53:52.297527 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 23:53:52.303180 kernel: BTRFS info (device vda6): turning on async discard Nov 4 23:53:52.304506 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 23:53:52.305461 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 23:53:52.311357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 23:53:52.412488 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 23:53:52.424587 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory Nov 4 23:53:52.446196 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 23:53:52.456888 coreos-metadata[895]: Nov 04 23:53:52.455 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 4 23:53:52.459104 coreos-metadata[896]: Nov 04 23:53:52.458 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 4 23:53:52.462556 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 23:53:52.471046 coreos-metadata[895]: Nov 04 23:53:52.470 INFO Fetch successful Nov 4 23:53:52.474342 coreos-metadata[896]: Nov 04 23:53:52.474 INFO Fetch successful Nov 4 23:53:52.479933 coreos-metadata[896]: Nov 04 23:53:52.479 INFO wrote hostname ci-4487.0.0-n-50b5667972 to /sysroot/etc/hostname Nov 4 23:53:52.481134 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 23:53:52.487335 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 4 23:53:52.487572 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 4 23:53:52.602888 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 23:53:52.605684 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 23:53:52.608116 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 23:53:52.631880 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 23:53:52.636217 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:53:52.656302 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 4 23:53:52.685867 ignition[1014]: INFO : Ignition 2.22.0 Nov 4 23:53:52.685867 ignition[1014]: INFO : Stage: mount Nov 4 23:53:52.687870 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:52.687870 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:53:52.689858 ignition[1014]: INFO : mount: mount passed Nov 4 23:53:52.689858 ignition[1014]: INFO : Ignition finished successfully Nov 4 23:53:52.692418 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 23:53:52.695586 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 23:53:52.723683 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:53:52.749009 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025) Nov 4 23:53:52.753605 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:53:52.753706 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:53:52.762993 kernel: BTRFS info (device vda6): turning on async discard Nov 4 23:53:52.763114 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 23:53:52.766112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 23:53:52.819517 ignition[1042]: INFO : Ignition 2.22.0 Nov 4 23:53:52.819517 ignition[1042]: INFO : Stage: files Nov 4 23:53:52.821452 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:52.821452 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:53:52.821452 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping Nov 4 23:53:52.824310 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 23:53:52.824310 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 23:53:52.828973 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 23:53:52.830384 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 23:53:52.831717 unknown[1042]: wrote ssh authorized keys file for user: core Nov 4 23:53:52.833068 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 23:53:52.834311 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:53:52.834311 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 4 23:53:52.878770 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 23:53:52.953383 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:53:52.953383 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 23:53:52.956308 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 4 23:53:53.154190 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 4 23:53:53.279639 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 23:53:53.279639 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 4 23:53:53.282460 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 23:53:53.282460 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:53:53.282460 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:53:53.282460 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:53:53.282460 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:53:53.282460 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:53:53.282460 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:53:53.291572 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:53:53.291572 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:53:53.291572 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:53:53.291572 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:53:53.291572 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:53:53.291572 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 4 23:53:53.734837 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 4 23:53:54.069172 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 4 23:53:54.069172 ignition[1042]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 4 23:53:54.072340 ignition[1042]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:53:54.074853 ignition[1042]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 23:53:54.074853 ignition[1042]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 4 23:53:54.074853 ignition[1042]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 4 23:53:54.074853 ignition[1042]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 23:53:54.074853 ignition[1042]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:53:54.074853 ignition[1042]: 
INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:53:54.074853 ignition[1042]: INFO : files: files passed Nov 4 23:53:54.074853 ignition[1042]: INFO : Ignition finished successfully Nov 4 23:53:54.077397 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 23:53:54.082152 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 23:53:54.087180 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 23:53:54.103456 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 23:53:54.103644 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 4 23:53:54.114998 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:53:54.114998 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:53:54.117764 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:53:54.119597 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:53:54.121015 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 23:53:54.123445 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 23:53:54.185207 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 23:53:54.185376 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 23:53:54.187180 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 23:53:54.188651 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 23:53:54.190592 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Nov 4 23:53:54.193091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 23:53:54.237201 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:53:54.241155 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 23:53:54.268342 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:53:54.268883 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:53:54.272285 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:53:54.273224 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 23:53:54.275130 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 23:53:54.275415 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:53:54.277527 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 23:53:54.278501 systemd[1]: Stopped target basic.target - Basic System. Nov 4 23:53:54.280217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 23:53:54.281916 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:53:54.283571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 23:53:54.285252 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:53:54.286919 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 23:53:54.288820 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:53:54.290715 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 23:53:54.292626 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 23:53:54.294401 systemd[1]: Stopped target swap.target - Swaps. Nov 4 23:53:54.296072 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Nov 4 23:53:54.296272 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:53:54.298126 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:53:54.299207 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:53:54.300747 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 23:53:54.301230 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:53:54.302512 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 23:53:54.302704 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 23:53:54.305098 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 23:53:54.305299 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:53:54.307149 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 23:53:54.307420 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 23:53:54.309096 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 4 23:53:54.309315 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 4 23:53:54.313119 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 23:53:54.317296 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 23:53:54.319287 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 23:53:54.319627 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:53:54.323572 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 23:53:54.323892 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:53:54.326415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 4 23:53:54.326576 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:53:54.340185 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 23:53:54.340307 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 23:53:54.368063 ignition[1097]: INFO : Ignition 2.22.0 Nov 4 23:53:54.368063 ignition[1097]: INFO : Stage: umount Nov 4 23:53:54.370725 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:53:54.370725 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 4 23:53:54.370725 ignition[1097]: INFO : umount: umount passed Nov 4 23:53:54.370725 ignition[1097]: INFO : Ignition finished successfully Nov 4 23:53:54.379505 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 23:53:54.380573 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 23:53:54.382778 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 23:53:54.385294 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 23:53:54.386150 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 23:53:54.388266 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 23:53:54.388448 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 23:53:54.390367 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 23:53:54.390469 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 23:53:54.391875 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 4 23:53:54.391960 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 4 23:53:54.393440 systemd[1]: Stopped target network.target - Network. Nov 4 23:53:54.394746 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 23:53:54.394853 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 4 23:53:54.396444 systemd[1]: Stopped target paths.target - Path Units. Nov 4 23:53:54.397907 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 23:53:54.402129 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:53:54.403206 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 23:53:54.405052 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 23:53:54.406816 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 23:53:54.406916 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:53:54.408521 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 23:53:54.408596 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:53:54.410166 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 23:53:54.410278 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 23:53:54.412029 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 23:53:54.412144 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 23:53:54.413606 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 23:53:54.413702 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 23:53:54.415466 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 23:53:54.417162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 23:53:54.432739 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 23:53:54.433001 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 23:53:54.437902 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 23:53:54.438144 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 23:53:54.443817 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Nov 4 23:53:54.445026 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 23:53:54.445104 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:53:54.448131 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 23:53:54.450401 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 23:53:54.450509 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:53:54.451317 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:53:54.451452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:53:54.455279 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 23:53:54.455416 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 23:53:54.457560 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:53:54.471226 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 23:53:54.471520 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:53:54.473807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 23:53:54.473865 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 23:53:54.474609 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 23:53:54.474665 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:53:54.479160 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 23:53:54.479263 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:53:54.482251 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 23:53:54.482343 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 23:53:54.483235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 4 23:53:54.483308 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:53:54.487163 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 23:53:54.490376 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 23:53:54.490497 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:53:54.493137 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 23:53:54.493229 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:53:54.494637 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 4 23:53:54.494711 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:53:54.497276 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 23:53:54.497347 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:53:54.498906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:53:54.499008 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:53:54.514424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 23:53:54.516499 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 23:53:54.520426 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 23:53:54.520577 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 23:53:54.522774 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 23:53:54.524795 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 23:53:54.566169 systemd[1]: Switching root. 
Nov 4 23:53:54.605289 systemd-journald[298]: Journal stopped Nov 4 23:53:56.015129 systemd-journald[298]: Received SIGTERM from PID 1 (systemd). Nov 4 23:53:56.015215 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 23:53:56.015236 kernel: SELinux: policy capability open_perms=1 Nov 4 23:53:56.015249 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 23:53:56.015261 kernel: SELinux: policy capability always_check_network=0 Nov 4 23:53:56.015273 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 23:53:56.015312 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 23:53:56.015325 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 23:53:56.015362 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 23:53:56.015381 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 23:53:56.015401 kernel: audit: type=1403 audit(1762300434.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 23:53:56.015418 systemd[1]: Successfully loaded SELinux policy in 92.259ms. Nov 4 23:53:56.015453 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.698ms. Nov 4 23:53:56.015468 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:53:56.015492 systemd[1]: Detected virtualization kvm. Nov 4 23:53:56.015505 systemd[1]: Detected architecture x86-64. Nov 4 23:53:56.015518 systemd[1]: Detected first boot. Nov 4 23:53:56.015532 systemd[1]: Hostname set to . Nov 4 23:53:56.015545 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 23:53:56.015564 zram_generator::config[1141]: No configuration found. 
Nov 4 23:53:56.015578 kernel: Guest personality initialized and is inactive Nov 4 23:53:56.015590 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 4 23:53:56.015602 kernel: Initialized host personality Nov 4 23:53:56.015613 kernel: NET: Registered PF_VSOCK protocol family Nov 4 23:53:56.015629 systemd[1]: Populated /etc with preset unit settings. Nov 4 23:53:56.015644 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 23:53:56.015663 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 23:53:56.015675 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 23:53:56.015689 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 23:53:56.015701 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 23:53:56.015714 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 23:53:56.015726 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 23:53:56.015739 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 23:53:56.015758 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 23:53:56.015772 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 23:53:56.015784 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 23:53:56.015796 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:53:56.015809 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:53:56.015822 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 23:53:56.015834 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Nov 4 23:53:56.015854 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 23:53:56.015867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:53:56.015879 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 23:53:56.015892 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:53:56.015904 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:53:56.015923 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 23:53:56.015936 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 23:53:56.015963 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 23:53:56.015976 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 23:53:56.015988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:53:56.016001 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:53:56.016013 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:53:56.016034 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:53:56.016047 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 23:53:56.016059 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 23:53:56.016071 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 23:53:56.016084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:53:56.016097 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:53:56.016110 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:53:56.016129 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Nov 4 23:53:56.016142 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 23:53:56.016155 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 23:53:56.016169 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 23:53:56.016182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:56.016195 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 23:53:56.016208 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 23:53:56.016227 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 23:53:56.016240 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 23:53:56.016253 systemd[1]: Reached target machines.target - Containers. Nov 4 23:53:56.016267 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 23:53:56.016280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:53:56.016293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:53:56.016306 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 23:53:56.016327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:53:56.016340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:53:56.016353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:53:56.016366 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 23:53:56.016378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 4 23:53:56.016391 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 23:53:56.016411 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 23:53:56.016424 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 23:53:56.016436 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 23:53:56.016448 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 23:53:56.016461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:53:56.016474 kernel: fuse: init (API version 7.41) Nov 4 23:53:56.016486 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:53:56.016506 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:53:56.016519 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:53:56.016532 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 23:53:56.016547 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 23:53:56.016567 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:53:56.016580 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:53:56.016593 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 23:53:56.016606 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 23:53:56.016618 systemd[1]: Mounted media.mount - External Media Directory. 
Nov 4 23:53:56.016631 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 23:53:56.016643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 23:53:56.016663 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 23:53:56.016676 kernel: ACPI: bus type drm_connector registered Nov 4 23:53:56.016688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:53:56.016707 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 23:53:56.016726 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 23:53:56.016738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:53:56.016751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:53:56.016763 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:53:56.016823 systemd-journald[1218]: Collecting audit messages is disabled. Nov 4 23:53:56.022624 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:53:56.022678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:53:56.022693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:53:56.022706 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 23:53:56.022721 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 23:53:56.022738 systemd-journald[1218]: Journal started Nov 4 23:53:56.022765 systemd-journald[1218]: Runtime Journal (/run/log/journal/8cdbeea6b8f443219a793938a99692d9) is 4.9M, max 39.2M, 34.3M free. Nov 4 23:53:55.590284 systemd[1]: Queued start job for default target multi-user.target. Nov 4 23:53:56.023134 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:53:55.614712 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Nov 4 23:53:55.615280 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 23:53:56.028361 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:53:56.028600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:53:56.029860 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:53:56.031412 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:53:56.032528 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 23:53:56.048447 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:53:56.050155 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 23:53:56.050933 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 23:53:56.051051 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:53:56.052888 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 23:53:56.053836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:53:56.059220 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 23:53:56.062251 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 23:53:56.064065 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:53:56.066403 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 23:53:56.068103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 4 23:53:56.073287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:53:56.078490 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 23:53:56.080674 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 23:53:56.082464 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 23:53:56.083898 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 23:53:56.098385 systemd-journald[1218]: Time spent on flushing to /var/log/journal/8cdbeea6b8f443219a793938a99692d9 is 87.044ms for 995 entries. Nov 4 23:53:56.098385 systemd-journald[1218]: System Journal (/var/log/journal/8cdbeea6b8f443219a793938a99692d9) is 8M, max 163.5M, 155.5M free. Nov 4 23:53:56.203936 systemd-journald[1218]: Received client request to flush runtime journal. Nov 4 23:53:56.204165 kernel: loop1: detected capacity change from 0 to 219144 Nov 4 23:53:56.204197 kernel: loop2: detected capacity change from 0 to 110984 Nov 4 23:53:56.117044 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 23:53:56.118350 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 23:53:56.127241 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 23:53:56.176047 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:53:56.193029 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 4 23:53:56.193051 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 4 23:53:56.208107 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 23:53:56.212344 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 4 23:53:56.213619 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 23:53:56.222303 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 23:53:56.229023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:53:56.238465 kernel: loop3: detected capacity change from 0 to 8 Nov 4 23:53:56.258192 kernel: loop4: detected capacity change from 0 to 128048 Nov 4 23:53:56.287190 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 23:53:56.301150 kernel: loop5: detected capacity change from 0 to 219144 Nov 4 23:53:56.295385 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:53:56.302669 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:53:56.316001 kernel: loop6: detected capacity change from 0 to 110984 Nov 4 23:53:56.331061 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:53:56.336977 kernel: loop7: detected capacity change from 0 to 8 Nov 4 23:53:56.340989 kernel: loop1: detected capacity change from 0 to 128048 Nov 4 23:53:56.355217 (sd-merge)[1287]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Nov 4 23:53:56.360772 (sd-merge)[1287]: Merged extensions into '/usr'. Nov 4 23:53:56.367845 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:53:56.367880 systemd[1]: Reloading... Nov 4 23:53:56.380698 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Nov 4 23:53:56.381267 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Nov 4 23:53:56.484990 zram_generator::config[1318]: No configuration found. Nov 4 23:53:56.646475 systemd-resolved[1286]: Positive Trust Anchors: Nov 4 23:53:56.646499 systemd-resolved[1286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:53:56.646504 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:53:56.646540 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:53:56.673401 systemd-resolved[1286]: Using system hostname 'ci-4487.0.0-n-50b5667972'. Nov 4 23:53:56.769527 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:53:56.769756 systemd[1]: Reloading finished in 401 ms. Nov 4 23:53:56.783931 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 23:53:56.785179 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:53:56.786473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:53:56.789031 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:53:56.794169 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:53:56.796694 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 23:53:56.803126 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 23:53:56.819263 systemd[1]: Starting ensure-sysext.service... Nov 4 23:53:56.829249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Nov 4 23:53:56.841068 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 23:53:56.842169 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 23:53:56.857060 systemd[1]: Reload requested from client PID 1369 ('systemctl') (unit ensure-sysext.service)...
Nov 4 23:53:56.857084 systemd[1]: Reloading...
Nov 4 23:53:56.890104 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 23:53:56.890139 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 23:53:56.890376 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 23:53:56.890600 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 23:53:56.891476 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 23:53:56.891700 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Nov 4 23:53:56.891758 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Nov 4 23:53:56.898244 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:53:56.898262 systemd-tmpfiles[1370]: Skipping /boot
Nov 4 23:53:56.918362 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:53:56.918382 systemd-tmpfiles[1370]: Skipping /boot
Nov 4 23:53:56.999066 zram_generator::config[1405]: No configuration found.
Nov 4 23:53:57.251792 systemd[1]: Reloading finished in 394 ms.
Nov 4 23:53:57.265474 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 23:53:57.288115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:53:57.300281 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:53:57.304400 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 23:53:57.309315 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 23:53:57.318482 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 23:53:57.324527 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:53:57.329346 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 23:53:57.336601 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.336883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:57.347859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:53:57.357989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:53:57.366618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:53:57.367780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:57.369160 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:57.369367 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.387486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.388162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:57.388445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:57.388583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:57.388730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.400693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.403288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:57.409654 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:53:57.412477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:57.412957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:57.413182 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.427424 systemd[1]: Finished ensure-sysext.service.
Nov 4 23:53:57.429874 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 23:53:57.452187 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 23:53:57.527386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:53:57.530938 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:53:57.537596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:53:57.539379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:53:57.540870 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:53:57.542989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:53:57.546371 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 23:53:57.557535 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:53:57.557833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:53:57.558674 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:53:57.561390 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:53:57.612910 augenrules[1486]: No rules
Nov 4 23:53:57.609701 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:53:57.610072 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:53:57.624340 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 23:53:57.630911 systemd-udevd[1451]: Using default interface naming scheme 'v257'.
Nov 4 23:53:57.636339 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:53:57.664785 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 23:53:57.666170 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 23:53:57.691075 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:53:57.695963 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:53:57.837094 systemd-networkd[1497]: lo: Link UP
Nov 4 23:53:57.838031 systemd-networkd[1497]: lo: Gained carrier
Nov 4 23:53:57.866449 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:53:57.867615 systemd[1]: Reached target network.target - Network.
Nov 4 23:53:57.880839 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 23:53:57.889852 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 23:53:57.943971 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Nov 4 23:53:57.948765 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 23:53:57.970230 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 4 23:53:57.971362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.971595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:53:57.975187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:53:57.980211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:53:57.986852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:53:57.988120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:53:57.988195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:53:57.988252 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 23:53:57.988281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:53:57.988792 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 23:53:58.025635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:53:58.025940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:53:58.029014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:53:58.033070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:53:58.040648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:53:58.087997 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 4 23:53:58.090882 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:53:58.091466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:53:58.095182 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 4 23:53:58.103558 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:53:58.169779 systemd-networkd[1497]: eth1: Configuring with /run/systemd/network/10-4e:ba:26:db:ce:fa.network.
Nov 4 23:53:58.172696 systemd-networkd[1497]: eth1: Link UP
Nov 4 23:53:58.174486 systemd-networkd[1497]: eth1: Gained carrier
Nov 4 23:53:58.186719 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 4 23:53:58.201016 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 23:53:58.229754 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 23:53:58.229375 systemd-networkd[1497]: eth0: Configuring with /run/systemd/network/10-9e:18:00:89:e0:2f.network.
Nov 4 23:53:58.230467 systemd-networkd[1497]: eth0: Link UP
Nov 4 23:53:58.230535 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 4 23:53:58.231146 systemd-networkd[1497]: eth0: Gained carrier
Nov 4 23:53:58.233267 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 4 23:53:58.236302 kernel: ACPI: button: Power Button [PWRF]
Nov 4 23:53:58.238666 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 4 23:53:58.352093 ldconfig[1449]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 23:53:58.356270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:53:58.363207 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 23:53:58.370652 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 23:53:58.384223 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 23:53:58.414991 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 4 23:53:58.430179 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 23:53:58.435662 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 23:53:58.444087 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 23:53:58.446440 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:53:58.448720 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 23:53:58.450594 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 23:53:58.452282 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 23:53:58.454433 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 23:53:58.456425 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 23:53:58.458632 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 23:53:58.460119 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 23:53:58.460168 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:53:58.476291 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:53:58.479520 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 23:53:58.484158 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 23:53:58.493931 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 23:53:58.496407 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 23:53:58.498393 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 23:53:58.517111 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 23:53:58.521080 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 23:53:58.525324 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 23:53:58.527549 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:53:58.528394 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:53:58.531293 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:53:58.531354 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 23:53:58.535112 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 23:53:58.540019 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 4 23:53:58.545315 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 23:53:58.551313 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 23:53:58.557232 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 23:53:58.566282 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 23:53:58.568240 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 23:53:58.570743 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 23:53:58.579332 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 23:53:58.584170 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 23:53:58.590281 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 23:53:58.601334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 23:53:58.609984 jq[1563]: false
Nov 4 23:53:58.620283 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 23:53:58.621358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 23:53:58.629874 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 23:53:58.636374 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 23:53:58.647404 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 23:53:58.654820 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 23:53:58.657525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 23:53:58.657848 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 23:53:58.731918 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 23:53:58.731629 dbus-daemon[1561]: [system] SELinux support is enabled
Nov 4 23:53:58.739232 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 4 23:53:58.739332 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 4 23:53:58.744154 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 23:53:58.744205 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 23:53:58.754246 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache
Nov 4 23:53:58.751418 oslogin_cache_refresh[1566]: Refreshing passwd entry cache
Nov 4 23:53:58.761991 jq[1579]: true
Nov 4 23:53:58.763039 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting
Nov 4 23:53:58.763968 kernel: Console: switching to colour dummy device 80x25
Nov 4 23:53:58.764044 oslogin_cache_refresh[1566]: Failure getting users, quitting
Nov 4 23:53:58.764842 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:53:58.764842 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache
Nov 4 23:53:58.764087 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 23:53:58.764158 oslogin_cache_refresh[1566]: Refreshing group entry cache
Nov 4 23:53:58.767363 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 23:53:58.767584 oslogin_cache_refresh[1566]: Failure getting groups, quitting
Nov 4 23:53:58.769218 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting
Nov 4 23:53:58.769218 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:53:58.767842 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 4 23:53:58.767603 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 23:53:58.767871 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 23:53:58.768741 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 23:53:58.770199 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 23:53:58.771256 extend-filesystems[1565]: Found /dev/vda6
Nov 4 23:53:58.775498 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 23:53:58.775827 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 23:53:58.789149 extend-filesystems[1565]: Found /dev/vda9
Nov 4 23:53:58.791796 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 23:53:58.794159 tar[1588]: linux-amd64/LICENSE
Nov 4 23:53:58.794159 tar[1588]: linux-amd64/helm
Nov 4 23:53:58.792235 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 23:53:58.800679 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 4 23:53:58.800779 kernel: [drm] features: -context_init
Nov 4 23:53:58.800802 extend-filesystems[1565]: Checking size of /dev/vda9
Nov 4 23:53:58.812620 (ntainerd)[1600]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 4 23:53:58.823502 update_engine[1573]: I20251104 23:53:58.820850 1573 main.cc:92] Flatcar Update Engine starting
Nov 4 23:53:58.845187 coreos-metadata[1560]: Nov 04 23:53:58.843 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:53:58.852140 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 23:53:58.861355 update_engine[1573]: I20251104 23:53:58.858237 1573 update_check_scheduler.cc:74] Next update check in 2m41s
Nov 4 23:53:58.873692 jq[1599]: true
Nov 4 23:53:58.878989 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 23:53:58.892978 coreos-metadata[1560]: Nov 04 23:53:58.890 INFO Fetch successful
Nov 4 23:53:58.898229 extend-filesystems[1565]: Resized partition /dev/vda9
Nov 4 23:53:58.902396 kernel: [drm] number of scanouts: 1
Nov 4 23:53:58.902493 kernel: [drm] number of cap sets: 0
Nov 4 23:53:58.908325 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 4 23:53:58.911822 extend-filesystems[1615]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 23:53:58.940081 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 4 23:53:58.940170 kernel: Console: switching to colour frame buffer device 128x48
Nov 4 23:53:58.940187 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 4 23:53:58.951994 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks
Nov 4 23:53:58.972340 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 23:53:58.974650 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 23:53:59.109096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:53:59.148051 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Nov 4 23:53:59.210469 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 23:53:59.210469 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 7
Nov 4 23:53:59.210469 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Nov 4 23:53:59.210181 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 23:53:59.210891 extend-filesystems[1565]: Resized filesystem in /dev/vda9
Nov 4 23:53:59.211170 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 23:53:59.229999 bash[1638]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:53:59.227053 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 23:53:59.235802 systemd[1]: Starting sshkeys.service...
Nov 4 23:53:59.349887 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 4 23:53:59.358627 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 4 23:53:59.443183 systemd-networkd[1497]: eth1: Gained IPv6LL
Nov 4 23:53:59.445153 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 4 23:53:59.451152 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 23:53:59.456164 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 23:53:59.463839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:53:59.474445 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 23:53:59.599294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:53:59.636183 systemd-networkd[1497]: eth0: Gained IPv6LL
Nov 4 23:53:59.638662 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Nov 4 23:53:59.659992 coreos-metadata[1650]: Nov 04 23:53:59.654 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 4 23:53:59.659632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:53:59.659940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:53:59.662894 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:53:59.673972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:53:59.682416 coreos-metadata[1650]: Nov 04 23:53:59.680 INFO Fetch successful
Nov 4 23:53:59.742071 containerd[1600]: time="2025-11-04T23:53:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 23:53:59.749015 containerd[1600]: time="2025-11-04T23:53:59.747240289Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 4 23:53:59.811142 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 23:53:59.815163 unknown[1650]: wrote ssh authorized keys file for user: core
Nov 4 23:53:59.867330 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 23:53:59.891791 update-ssh-keys[1679]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:53:59.895242 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904422548Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.805µs"
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904466145Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904488224Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904722320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904740154Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904770802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904841829Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:53:59.905589 containerd[1600]: time="2025-11-04T23:53:59.904858499Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907070094Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907102104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907118112Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907127452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907267257Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907548151Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907586987Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.907596977Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.911372573Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.912014959Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 23:53:59.914042 containerd[1600]: time="2025-11-04T23:53:59.912204364Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 23:53:59.908767 systemd[1]: Finished sshkeys.service.
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927566439Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927680010Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927703972Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927722099Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927740111Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927758864Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927776796Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927794840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927811668Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927825946Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927838205Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.927853172Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.928049548Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 23:53:59.928994 containerd[1600]: time="2025-11-04T23:53:59.928071128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928087460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928108264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928119592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928130748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928144670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928154769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928169042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928186613Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928206132Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928310113Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928332337Z" level=info msg="Start snapshots syncer"
Nov 4 23:53:59.929481 containerd[1600]: time="2025-11-04T23:53:59.928363255Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 23:53:59.929750 containerd[1600]: time="2025-11-04T23:53:59.928669497Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableU
nprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 23:53:59.929750 containerd[1600]: time="2025-11-04T23:53:59.928723399Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:53:59.929922 containerd[1600]: time="2025-11-04T23:53:59.928816587Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930670557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930757840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930772695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930784310Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930797572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930828724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930843104Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930891304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930903950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930914460Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.930978795Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.931005919Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:53:59.932014 containerd[1600]: time="2025-11-04T23:53:59.931016041Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:53:59.934244 containerd[1600]: time="2025-11-04T23:53:59.931027131Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:53:59.934244 containerd[1600]: time="2025-11-04T23:53:59.934161113Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:53:59.934396 containerd[1600]: time="2025-11-04T23:53:59.934292487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:53:59.934396 containerd[1600]: time="2025-11-04T23:53:59.934316349Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:53:59.934396 containerd[1600]: 
time="2025-11-04T23:53:59.934343438Z" level=info msg="runtime interface created" Nov 4 23:53:59.934396 containerd[1600]: time="2025-11-04T23:53:59.934351741Z" level=info msg="created NRI interface" Nov 4 23:53:59.934396 containerd[1600]: time="2025-11-04T23:53:59.934366119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:53:59.934396 containerd[1600]: time="2025-11-04T23:53:59.934390070Z" level=info msg="Connect containerd service" Nov 4 23:53:59.934507 containerd[1600]: time="2025-11-04T23:53:59.934439814Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:53:59.941471 containerd[1600]: time="2025-11-04T23:53:59.941388159Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:53:59.951407 systemd-logind[1571]: New seat seat0. Nov 4 23:53:59.962274 systemd-logind[1571]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 23:53:59.962316 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 23:53:59.962746 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 23:54:00.012278 kernel: EDAC MC: Ver: 3.0.0 Nov 4 23:54:00.075892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 4 23:54:00.392904 tar[1588]: linux-amd64/README.md Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412040553Z" level=info msg="Start subscribing containerd event" Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412115042Z" level=info msg="Start recovering state" Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412243915Z" level=info msg="Start event monitor" Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412260996Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412273837Z" level=info msg="Start streaming server" Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412291192Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412300195Z" level=info msg="runtime interface starting up..." Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412309684Z" level=info msg="starting plugins..." Nov 4 23:54:00.413534 containerd[1600]: time="2025-11-04T23:54:00.412326680Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:54:00.415163 containerd[1600]: time="2025-11-04T23:54:00.414549168Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:54:00.415163 containerd[1600]: time="2025-11-04T23:54:00.414782434Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:54:00.422873 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 23:54:00.423611 containerd[1600]: time="2025-11-04T23:54:00.423182928Z" level=info msg="containerd successfully booted in 0.713676s" Nov 4 23:54:00.448001 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 4 23:54:00.738331 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:54:00.779524 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 23:54:00.788442 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 23:54:00.814510 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:54:00.814936 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 23:54:00.820333 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 23:54:00.850459 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:54:00.857544 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:54:00.863586 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:54:00.865390 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:54:01.183105 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:54:01.188408 systemd[1]: Started sshd@0-64.23.154.5:22-139.178.89.65:46410.service - OpenSSH per-connection server daemon (139.178.89.65:46410). Nov 4 23:54:01.320164 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 46410 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:01.324615 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:01.338968 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:54:01.344190 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:54:01.364602 systemd-logind[1571]: New session 1 of user core. Nov 4 23:54:01.393637 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:54:01.406745 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 4 23:54:01.428099 (systemd)[1729]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:54:01.437932 systemd-logind[1571]: New session c1 of user core. Nov 4 23:54:01.475227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:01.476848 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:54:01.493604 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:54:01.666305 systemd[1729]: Queued start job for default target default.target. Nov 4 23:54:01.672736 systemd[1729]: Created slice app.slice - User Application Slice. Nov 4 23:54:01.672785 systemd[1729]: Reached target paths.target - Paths. Nov 4 23:54:01.673663 systemd[1729]: Reached target timers.target - Timers. Nov 4 23:54:01.678168 systemd[1729]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:54:01.703778 systemd[1729]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:54:01.703983 systemd[1729]: Reached target sockets.target - Sockets. Nov 4 23:54:01.704063 systemd[1729]: Reached target basic.target - Basic System. Nov 4 23:54:01.704121 systemd[1729]: Reached target default.target - Main User Target. Nov 4 23:54:01.704166 systemd[1729]: Startup finished in 248ms. Nov 4 23:54:01.704470 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:54:01.717369 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:54:01.720742 systemd[1]: Startup finished in 2.682s (kernel) + 6.047s (initrd) + 6.936s (userspace) = 15.665s. Nov 4 23:54:02.017258 systemd[1]: Started sshd@1-64.23.154.5:22-139.178.89.65:46416.service - OpenSSH per-connection server daemon (139.178.89.65:46416). 
Nov 4 23:54:02.714418 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 46416 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:02.718363 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:02.732083 systemd-logind[1571]: New session 2 of user core. Nov 4 23:54:02.739339 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 23:54:02.813128 sshd[1757]: Connection closed by 139.178.89.65 port 46416 Nov 4 23:54:02.817598 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:02.833756 systemd[1]: sshd@1-64.23.154.5:22-139.178.89.65:46416.service: Deactivated successfully. Nov 4 23:54:02.840002 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 23:54:02.842908 systemd-logind[1571]: Session 2 logged out. Waiting for processes to exit. Nov 4 23:54:02.850557 systemd[1]: Started sshd@2-64.23.154.5:22-139.178.89.65:46430.service - OpenSSH per-connection server daemon (139.178.89.65:46430). Nov 4 23:54:02.852563 systemd-logind[1571]: Removed session 2. Nov 4 23:54:02.940850 sshd[1763]: Accepted publickey for core from 139.178.89.65 port 46430 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:02.942598 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:02.950767 systemd-logind[1571]: New session 3 of user core. Nov 4 23:54:02.957235 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 4 23:54:02.988400 kubelet[1738]: E1104 23:54:02.987667 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:54:02.991848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:54:02.992092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:54:02.992845 systemd[1]: kubelet.service: Consumed 1.736s CPU time, 257.7M memory peak. Nov 4 23:54:03.021161 sshd[1767]: Connection closed by 139.178.89.65 port 46430 Nov 4 23:54:03.022026 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:03.037601 systemd[1]: sshd@2-64.23.154.5:22-139.178.89.65:46430.service: Deactivated successfully. Nov 4 23:54:03.040054 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 23:54:03.041636 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit. Nov 4 23:54:03.044803 systemd[1]: Started sshd@3-64.23.154.5:22-139.178.89.65:46432.service - OpenSSH per-connection server daemon (139.178.89.65:46432). Nov 4 23:54:03.046635 systemd-logind[1571]: Removed session 3. Nov 4 23:54:03.126432 sshd[1775]: Accepted publickey for core from 139.178.89.65 port 46432 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:03.128729 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:03.135447 systemd-logind[1571]: New session 4 of user core. Nov 4 23:54:03.145335 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 4 23:54:03.208733 sshd[1778]: Connection closed by 139.178.89.65 port 46432 Nov 4 23:54:03.209451 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:03.224211 systemd[1]: sshd@3-64.23.154.5:22-139.178.89.65:46432.service: Deactivated successfully. Nov 4 23:54:03.226742 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:54:03.228215 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:54:03.231710 systemd[1]: Started sshd@4-64.23.154.5:22-139.178.89.65:46436.service - OpenSSH per-connection server daemon (139.178.89.65:46436). Nov 4 23:54:03.232557 systemd-logind[1571]: Removed session 4. Nov 4 23:54:03.308088 sshd[1784]: Accepted publickey for core from 139.178.89.65 port 46436 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:03.309800 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:03.316336 systemd-logind[1571]: New session 5 of user core. Nov 4 23:54:03.325335 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 23:54:03.413490 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:54:03.413919 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:03.430104 sudo[1788]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:03.433967 sshd[1787]: Connection closed by 139.178.89.65 port 46436 Nov 4 23:54:03.435201 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:03.451692 systemd[1]: sshd@4-64.23.154.5:22-139.178.89.65:46436.service: Deactivated successfully. Nov 4 23:54:03.453865 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:54:03.454906 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit. 
Nov 4 23:54:03.459252 systemd[1]: Started sshd@5-64.23.154.5:22-139.178.89.65:46440.service - OpenSSH per-connection server daemon (139.178.89.65:46440). Nov 4 23:54:03.460633 systemd-logind[1571]: Removed session 5. Nov 4 23:54:03.546005 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 46440 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:03.548764 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:03.558277 systemd-logind[1571]: New session 6 of user core. Nov 4 23:54:03.564334 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 23:54:03.631481 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:54:03.632325 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:03.640519 sudo[1799]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:03.650275 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:54:03.650559 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:03.669258 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:54:03.728073 augenrules[1821]: No rules Nov 4 23:54:03.729833 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:54:03.730208 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:54:03.732251 sudo[1798]: pam_unix(sudo:session): session closed for user root Nov 4 23:54:03.738993 sshd[1797]: Connection closed by 139.178.89.65 port 46440 Nov 4 23:54:03.739434 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Nov 4 23:54:03.750871 systemd[1]: sshd@5-64.23.154.5:22-139.178.89.65:46440.service: Deactivated successfully. Nov 4 23:54:03.754141 systemd[1]: session-6.scope: Deactivated successfully. 
Nov 4 23:54:03.755647 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:54:03.760909 systemd[1]: Started sshd@6-64.23.154.5:22-139.178.89.65:46456.service - OpenSSH per-connection server daemon (139.178.89.65:46456). Nov 4 23:54:03.763094 systemd-logind[1571]: Removed session 6. Nov 4 23:54:03.837138 sshd[1830]: Accepted publickey for core from 139.178.89.65 port 46456 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:54:03.839385 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:54:03.845918 systemd-logind[1571]: New session 7 of user core. Nov 4 23:54:03.853308 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 23:54:03.916543 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:54:03.917042 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:54:04.723838 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:54:04.742506 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:54:05.249364 dockerd[1851]: time="2025-11-04T23:54:05.249199541Z" level=info msg="Starting up" Nov 4 23:54:05.250508 dockerd[1851]: time="2025-11-04T23:54:05.250441197Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:54:05.276498 dockerd[1851]: time="2025-11-04T23:54:05.276419681Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:54:05.303990 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2305393307-merged.mount: Deactivated successfully. Nov 4 23:54:05.352764 dockerd[1851]: time="2025-11-04T23:54:05.352435692Z" level=info msg="Loading containers: start." 
Nov 4 23:54:05.368991 kernel: Initializing XFRM netlink socket Nov 4 23:54:05.679368 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Nov 4 23:54:05.684836 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Nov 4 23:54:05.697490 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Nov 4 23:54:05.749211 systemd-networkd[1497]: docker0: Link UP Nov 4 23:54:05.750010 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Nov 4 23:54:05.754436 dockerd[1851]: time="2025-11-04T23:54:05.754357879Z" level=info msg="Loading containers: done." Nov 4 23:54:05.779522 dockerd[1851]: time="2025-11-04T23:54:05.779438404Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:54:05.779769 dockerd[1851]: time="2025-11-04T23:54:05.779571752Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:54:05.779769 dockerd[1851]: time="2025-11-04T23:54:05.779710669Z" level=info msg="Initializing buildkit" Nov 4 23:54:05.817262 dockerd[1851]: time="2025-11-04T23:54:05.817194307Z" level=info msg="Completed buildkit initialization" Nov 4 23:54:05.832412 dockerd[1851]: time="2025-11-04T23:54:05.832270399Z" level=info msg="Daemon has completed initialization" Nov 4 23:54:05.833616 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 23:54:05.834863 dockerd[1851]: time="2025-11-04T23:54:05.834215513Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:54:06.300584 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2214530312-merged.mount: Deactivated successfully. 
Nov 4 23:54:06.673012 containerd[1600]: time="2025-11-04T23:54:06.672227483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 4 23:54:07.520350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298160541.mount: Deactivated successfully. Nov 4 23:54:09.208681 containerd[1600]: time="2025-11-04T23:54:09.208597578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:09.210496 containerd[1600]: time="2025-11-04T23:54:09.210442042Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 4 23:54:09.212005 containerd[1600]: time="2025-11-04T23:54:09.211286570Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:09.215884 containerd[1600]: time="2025-11-04T23:54:09.214436999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:09.215884 containerd[1600]: time="2025-11-04T23:54:09.215671696Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.543299525s" Nov 4 23:54:09.215884 containerd[1600]: time="2025-11-04T23:54:09.215722145Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 4 23:54:09.217076 containerd[1600]: time="2025-11-04T23:54:09.216987770Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 4 23:54:10.814920 containerd[1600]: time="2025-11-04T23:54:10.813308978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:10.814920 containerd[1600]: time="2025-11-04T23:54:10.814858173Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 4 23:54:10.815816 containerd[1600]: time="2025-11-04T23:54:10.815777981Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:10.818738 containerd[1600]: time="2025-11-04T23:54:10.818685051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:10.820309 containerd[1600]: time="2025-11-04T23:54:10.820263901Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.602916436s" Nov 4 23:54:10.820560 containerd[1600]: time="2025-11-04T23:54:10.820320337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 4 23:54:10.820993 containerd[1600]: time="2025-11-04T23:54:10.820970101Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 4 23:54:11.975188 containerd[1600]: 
time="2025-11-04T23:54:11.975086973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:11.976562 containerd[1600]: time="2025-11-04T23:54:11.976507407Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 4 23:54:11.977980 containerd[1600]: time="2025-11-04T23:54:11.977362207Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:11.981667 containerd[1600]: time="2025-11-04T23:54:11.981552494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:11.983165 containerd[1600]: time="2025-11-04T23:54:11.983113559Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.162107751s" Nov 4 23:54:11.983386 containerd[1600]: time="2025-11-04T23:54:11.983361021Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 4 23:54:11.984557 containerd[1600]: time="2025-11-04T23:54:11.984506280Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 4 23:54:13.013940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:54:13.018832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 23:54:13.457544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075752218.mount: Deactivated successfully. Nov 4 23:54:13.488268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:13.504208 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:54:13.605843 kubelet[2150]: E1104 23:54:13.605720 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:54:13.610796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:54:13.611086 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:54:13.611620 systemd[1]: kubelet.service: Consumed 477ms CPU time, 110.3M memory peak. 
Nov 4 23:54:14.033727 containerd[1600]: time="2025-11-04T23:54:14.033659549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:14.036269 containerd[1600]: time="2025-11-04T23:54:14.036187036Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 4 23:54:14.038288 containerd[1600]: time="2025-11-04T23:54:14.038192491Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:14.048003 containerd[1600]: time="2025-11-04T23:54:14.047173894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:14.049480 containerd[1600]: time="2025-11-04T23:54:14.049414541Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.064673489s" Nov 4 23:54:14.049480 containerd[1600]: time="2025-11-04T23:54:14.049484369Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 4 23:54:14.050261 containerd[1600]: time="2025-11-04T23:54:14.050063610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 4 23:54:14.051992 systemd-resolved[1286]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Nov 4 23:54:14.739725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402728808.mount: Deactivated successfully. Nov 4 23:54:15.963518 containerd[1600]: time="2025-11-04T23:54:15.963423982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:15.965043 containerd[1600]: time="2025-11-04T23:54:15.964993061Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 4 23:54:15.965782 containerd[1600]: time="2025-11-04T23:54:15.965745225Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:15.969910 containerd[1600]: time="2025-11-04T23:54:15.969002246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:15.970514 containerd[1600]: time="2025-11-04T23:54:15.970472737Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.920376762s" Nov 4 23:54:15.970514 containerd[1600]: time="2025-11-04T23:54:15.970512900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 4 23:54:15.971246 containerd[1600]: time="2025-11-04T23:54:15.971016099Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 4 23:54:16.570522 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4259817722.mount: Deactivated successfully. Nov 4 23:54:16.578328 containerd[1600]: time="2025-11-04T23:54:16.578231987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:16.579990 containerd[1600]: time="2025-11-04T23:54:16.579924122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 4 23:54:16.580642 containerd[1600]: time="2025-11-04T23:54:16.580574936Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:16.583901 containerd[1600]: time="2025-11-04T23:54:16.583793670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:16.584308 containerd[1600]: time="2025-11-04T23:54:16.584273404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 613.233098ms" Nov 4 23:54:16.584369 containerd[1600]: time="2025-11-04T23:54:16.584313981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 4 23:54:16.586404 containerd[1600]: time="2025-11-04T23:54:16.586007570Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 4 23:54:17.107417 systemd-resolved[1286]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Nov 4 23:54:20.056988 containerd[1600]: time="2025-11-04T23:54:20.056122816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:20.058890 containerd[1600]: time="2025-11-04T23:54:20.058844429Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 4 23:54:20.060268 containerd[1600]: time="2025-11-04T23:54:20.060225865Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:20.063373 containerd[1600]: time="2025-11-04T23:54:20.063320120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:20.065988 containerd[1600]: time="2025-11-04T23:54:20.064572172Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.478440256s" Nov 4 23:54:20.065988 containerd[1600]: time="2025-11-04T23:54:20.064623704Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 4 23:54:23.764204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 23:54:23.768280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:24.006271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:54:24.019671 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:54:24.105993 kubelet[2284]: E1104 23:54:24.105899 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:54:24.110592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:54:24.110854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:54:24.112122 systemd[1]: kubelet.service: Consumed 239ms CPU time, 110.6M memory peak. Nov 4 23:54:25.050215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:25.050458 systemd[1]: kubelet.service: Consumed 239ms CPU time, 110.6M memory peak. Nov 4 23:54:25.064112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:25.099332 systemd[1]: Reload requested from client PID 2298 ('systemctl') (unit session-7.scope)... Nov 4 23:54:25.099392 systemd[1]: Reloading... Nov 4 23:54:25.262994 zram_generator::config[2342]: No configuration found. Nov 4 23:54:25.592047 systemd[1]: Reloading finished in 492 ms. Nov 4 23:54:25.674205 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:25.678160 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:54:25.678586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:25.678681 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.2M memory peak. Nov 4 23:54:25.681689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:25.894460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:54:25.911576 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:54:25.976185 kubelet[2398]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:54:25.976780 kubelet[2398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:54:25.978774 kubelet[2398]: I1104 23:54:25.978690 2398 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:54:27.311546 kubelet[2398]: I1104 23:54:27.311480 2398 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 23:54:27.312537 kubelet[2398]: I1104 23:54:27.312316 2398 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:54:27.316382 kubelet[2398]: I1104 23:54:27.316321 2398 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 23:54:27.317889 kubelet[2398]: I1104 23:54:27.316606 2398 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 23:54:27.317889 kubelet[2398]: I1104 23:54:27.317057 2398 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:54:27.343728 kubelet[2398]: E1104 23:54:27.343614 2398 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.154.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:54:27.346114 kubelet[2398]: I1104 23:54:27.345057 2398 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:54:27.363602 kubelet[2398]: I1104 23:54:27.363562 2398 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:54:27.373715 kubelet[2398]: I1104 23:54:27.373662 2398 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 23:54:27.374359 kubelet[2398]: I1104 23:54:27.374312 2398 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:54:27.376682 kubelet[2398]: I1104 23:54:27.374490 2398 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-50b5667972","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:54:27.377229 kubelet[2398]: I1104 23:54:27.377201 2398 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 
23:54:27.377371 kubelet[2398]: I1104 23:54:27.377355 2398 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 23:54:27.377629 kubelet[2398]: I1104 23:54:27.377606 2398 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 23:54:27.385324 kubelet[2398]: I1104 23:54:27.385269 2398 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:27.385934 kubelet[2398]: I1104 23:54:27.385916 2398 kubelet.go:475] "Attempting to sync node with API server" Nov 4 23:54:27.386730 kubelet[2398]: I1104 23:54:27.386702 2398 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:54:27.387228 kubelet[2398]: E1104 23:54:27.387050 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.154.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-50b5667972&limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:54:27.388975 kubelet[2398]: I1104 23:54:27.388852 2398 kubelet.go:387] "Adding apiserver pod source" Nov 4 23:54:27.388975 kubelet[2398]: I1104 23:54:27.388914 2398 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:54:27.395187 kubelet[2398]: E1104 23:54:27.394816 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.154.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:54:27.395820 kubelet[2398]: I1104 23:54:27.395618 2398 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:54:27.399782 kubelet[2398]: I1104 23:54:27.399746 2398 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:54:27.401201 kubelet[2398]: I1104 23:54:27.400014 2398 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 23:54:27.401201 kubelet[2398]: W1104 23:54:27.400106 2398 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:54:27.406496 kubelet[2398]: I1104 23:54:27.406460 2398 server.go:1262] "Started kubelet" Nov 4 23:54:27.407970 kubelet[2398]: I1104 23:54:27.407924 2398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:54:27.413384 kubelet[2398]: E1104 23:54:27.410843 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.154.5:6443/api/v1/namespaces/default/events\": dial tcp 64.23.154.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-50b5667972.1874f2f023435159 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-50b5667972,UID:ci-4487.0.0-n-50b5667972,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-50b5667972,},FirstTimestamp:2025-11-04 23:54:27.406393689 +0000 UTC m=+1.490156358,LastTimestamp:2025-11-04 23:54:27.406393689 +0000 UTC m=+1.490156358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-50b5667972,}" Nov 4 23:54:27.413384 kubelet[2398]: I1104 23:54:27.412979 2398 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:54:27.420487 kubelet[2398]: I1104 23:54:27.417429 2398 server.go:310] "Adding debug handlers to kubelet server" Nov 4 23:54:27.429838 
kubelet[2398]: I1104 23:54:27.429121 2398 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 23:54:27.429838 kubelet[2398]: E1104 23:54:27.429542 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-50b5667972\" not found" Nov 4 23:54:27.430822 kubelet[2398]: I1104 23:54:27.430737 2398 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:54:27.430985 kubelet[2398]: I1104 23:54:27.430845 2398 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 23:54:27.431753 kubelet[2398]: I1104 23:54:27.431724 2398 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 23:54:27.431908 kubelet[2398]: I1104 23:54:27.431794 2398 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:54:27.433068 kubelet[2398]: I1104 23:54:27.432857 2398 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:54:27.433322 kubelet[2398]: I1104 23:54:27.433302 2398 reconciler.go:29] "Reconciler: start to sync state" Nov 4 23:54:27.436927 kubelet[2398]: E1104 23:54:27.436874 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.154.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:54:27.437119 kubelet[2398]: E1104 23:54:27.437052 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.154.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-50b5667972?timeout=10s\": dial tcp 64.23.154.5:6443: connect: connection refused" interval="200ms" Nov 4 23:54:27.440554 kubelet[2398]: 
I1104 23:54:27.440448 2398 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:54:27.440829 kubelet[2398]: I1104 23:54:27.440785 2398 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:54:27.442013 kubelet[2398]: I1104 23:54:27.441813 2398 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 4 23:54:27.457891 kubelet[2398]: E1104 23:54:27.457837 2398 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:54:27.457891 kubelet[2398]: I1104 23:54:27.458121 2398 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:54:27.488298 kubelet[2398]: I1104 23:54:27.488265 2398 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:54:27.488523 kubelet[2398]: I1104 23:54:27.488507 2398 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:54:27.488692 kubelet[2398]: I1104 23:54:27.488605 2398 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:27.494079 kubelet[2398]: I1104 23:54:27.494035 2398 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 4 23:54:27.494492 kubelet[2398]: I1104 23:54:27.494326 2398 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 23:54:27.495769 kubelet[2398]: I1104 23:54:27.495445 2398 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 23:54:27.495769 kubelet[2398]: E1104 23:54:27.495538 2398 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:54:27.496439 kubelet[2398]: I1104 23:54:27.496414 2398 policy_none.go:49] "None policy: Start" Nov 4 23:54:27.497037 kubelet[2398]: I1104 23:54:27.496895 2398 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 23:54:27.497037 kubelet[2398]: I1104 23:54:27.496932 2398 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 23:54:27.497972 kubelet[2398]: E1104 23:54:27.496811 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.154.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:54:27.499403 kubelet[2398]: I1104 23:54:27.499354 2398 policy_none.go:47] "Start" Nov 4 23:54:27.509471 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:54:27.524654 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 23:54:27.528900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 4 23:54:27.529645 kubelet[2398]: E1104 23:54:27.529608 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.0-n-50b5667972\" not found" Nov 4 23:54:27.538145 kubelet[2398]: E1104 23:54:27.537640 2398 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:54:27.538145 kubelet[2398]: I1104 23:54:27.537846 2398 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:54:27.538145 kubelet[2398]: I1104 23:54:27.537857 2398 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:54:27.539438 kubelet[2398]: I1104 23:54:27.539400 2398 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:54:27.543137 kubelet[2398]: E1104 23:54:27.542979 2398 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:54:27.543365 kubelet[2398]: E1104 23:54:27.543339 2398 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.0-n-50b5667972\" not found" Nov 4 23:54:27.623696 systemd[1]: Created slice kubepods-burstable-podb57c84615e215db8fbc293fbe0e8fe26.slice - libcontainer container kubepods-burstable-podb57c84615e215db8fbc293fbe0e8fe26.slice. 
Nov 4 23:54:27.635415 kubelet[2398]: I1104 23:54:27.635338 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635415 kubelet[2398]: I1104 23:54:27.635416 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d7977a6b72c726b0ee8fc9493ade3b3-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-50b5667972\" (UID: \"5d7977a6b72c726b0ee8fc9493ade3b3\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635624 kubelet[2398]: I1104 23:54:27.635461 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635624 kubelet[2398]: I1104 23:54:27.635494 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635624 kubelet[2398]: I1104 23:54:27.635536 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635624 kubelet[2398]: I1104 23:54:27.635565 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de39fa52c9e72b4feee95b27cb4b38f-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-50b5667972\" (UID: \"9de39fa52c9e72b4feee95b27cb4b38f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635624 kubelet[2398]: I1104 23:54:27.635599 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de39fa52c9e72b4feee95b27cb4b38f-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-50b5667972\" (UID: \"9de39fa52c9e72b4feee95b27cb4b38f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635776 kubelet[2398]: I1104 23:54:27.635627 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de39fa52c9e72b4feee95b27cb4b38f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-50b5667972\" (UID: \"9de39fa52c9e72b4feee95b27cb4b38f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.635776 kubelet[2398]: I1104 23:54:27.635730 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.640779 kubelet[2398]: E1104 23:54:27.640688 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.642049 kubelet[2398]: I1104 23:54:27.641618 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.642454 kubelet[2398]: E1104 23:54:27.642421 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.154.5:6443/api/v1/nodes\": dial tcp 64.23.154.5:6443: connect: connection refused" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.642729 kubelet[2398]: E1104 23:54:27.642446 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.154.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-50b5667972?timeout=10s\": dial tcp 64.23.154.5:6443: connect: connection refused" interval="400ms" Nov 4 23:54:27.647602 systemd[1]: Created slice kubepods-burstable-pod5d7977a6b72c726b0ee8fc9493ade3b3.slice - libcontainer container kubepods-burstable-pod5d7977a6b72c726b0ee8fc9493ade3b3.slice. Nov 4 23:54:27.661118 kubelet[2398]: E1104 23:54:27.661076 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.665434 systemd[1]: Created slice kubepods-burstable-pod9de39fa52c9e72b4feee95b27cb4b38f.slice - libcontainer container kubepods-burstable-pod9de39fa52c9e72b4feee95b27cb4b38f.slice. 
Nov 4 23:54:27.668838 kubelet[2398]: E1104 23:54:27.668792 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.844129 kubelet[2398]: I1104 23:54:27.844097 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.844919 kubelet[2398]: E1104 23:54:27.844869 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.154.5:6443/api/v1/nodes\": dial tcp 64.23.154.5:6443: connect: connection refused" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:27.946474 kubelet[2398]: E1104 23:54:27.946338 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:27.949184 containerd[1600]: time="2025-11-04T23:54:27.948728556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-50b5667972,Uid:b57c84615e215db8fbc293fbe0e8fe26,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:27.966703 kubelet[2398]: E1104 23:54:27.966520 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:27.967993 containerd[1600]: time="2025-11-04T23:54:27.967873554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-50b5667972,Uid:5d7977a6b72c726b0ee8fc9493ade3b3,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:27.969448 systemd-resolved[1286]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Nov 4 23:54:27.975995 kubelet[2398]: E1104 23:54:27.975918 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:27.978623 containerd[1600]: time="2025-11-04T23:54:27.976978742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-50b5667972,Uid:9de39fa52c9e72b4feee95b27cb4b38f,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:28.044111 kubelet[2398]: E1104 23:54:28.044024 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.154.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-50b5667972?timeout=10s\": dial tcp 64.23.154.5:6443: connect: connection refused" interval="800ms" Nov 4 23:54:28.248135 kubelet[2398]: I1104 23:54:28.247742 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:28.248494 kubelet[2398]: E1104 23:54:28.248282 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.154.5:6443/api/v1/nodes\": dial tcp 64.23.154.5:6443: connect: connection refused" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:28.323589 kubelet[2398]: E1104 23:54:28.323511 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.154.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:54:28.487283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990879327.mount: Deactivated successfully. 
Nov 4 23:54:28.496221 containerd[1600]: time="2025-11-04T23:54:28.496018795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:28.499141 containerd[1600]: time="2025-11-04T23:54:28.498511367Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:28.500708 containerd[1600]: time="2025-11-04T23:54:28.500580897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 4 23:54:28.500708 containerd[1600]: time="2025-11-04T23:54:28.500706234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 23:54:28.502981 containerd[1600]: time="2025-11-04T23:54:28.502158481Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:28.503595 containerd[1600]: time="2025-11-04T23:54:28.503544378Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:28.504044 containerd[1600]: time="2025-11-04T23:54:28.504005335Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 23:54:28.506400 containerd[1600]: time="2025-11-04T23:54:28.506344639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:54:28.512360 
containerd[1600]: time="2025-11-04T23:54:28.512272925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 531.16234ms" Nov 4 23:54:28.518978 containerd[1600]: time="2025-11-04T23:54:28.518879364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 537.768857ms" Nov 4 23:54:28.527478 containerd[1600]: time="2025-11-04T23:54:28.526812440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 559.25304ms" Nov 4 23:54:28.672122 containerd[1600]: time="2025-11-04T23:54:28.671710327Z" level=info msg="connecting to shim 6195340cf1091f937e7ce00e9a33a7252fb91154b5c4ece078eed5e635a4cb1d" address="unix:///run/containerd/s/9bb0b6170205520ac366546b991606298480f8c4c6ba4eff43d548c85c851cc1" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:28.677777 containerd[1600]: time="2025-11-04T23:54:28.677679069Z" level=info msg="connecting to shim b6583b6079f7d7ab9604a9be7179006afea9e7baaaf25dbf384b33a3038d8c94" address="unix:///run/containerd/s/acd5e0675a61080b4314ff9c2cfa50512fb3b46e1d7a36a37ee9331db57cf8cb" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:28.687749 containerd[1600]: time="2025-11-04T23:54:28.687259208Z" level=info msg="connecting to shim 
42b0b02f2b523613aed501c51194fb3b3999b44c2d3af6fbebe82df89df8b843" address="unix:///run/containerd/s/d9561d3ccd2b43c399f42dfa260d34bf32a523b1f3cef0314466d25dbbedbcdb" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:54:28.828680 systemd[1]: Started cri-containerd-6195340cf1091f937e7ce00e9a33a7252fb91154b5c4ece078eed5e635a4cb1d.scope - libcontainer container 6195340cf1091f937e7ce00e9a33a7252fb91154b5c4ece078eed5e635a4cb1d. Nov 4 23:54:28.841760 systemd[1]: Started cri-containerd-b6583b6079f7d7ab9604a9be7179006afea9e7baaaf25dbf384b33a3038d8c94.scope - libcontainer container b6583b6079f7d7ab9604a9be7179006afea9e7baaaf25dbf384b33a3038d8c94. Nov 4 23:54:28.845264 kubelet[2398]: E1104 23:54:28.845225 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.154.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.0-n-50b5667972?timeout=10s\": dial tcp 64.23.154.5:6443: connect: connection refused" interval="1.6s" Nov 4 23:54:28.849610 systemd[1]: Started cri-containerd-42b0b02f2b523613aed501c51194fb3b3999b44c2d3af6fbebe82df89df8b843.scope - libcontainer container 42b0b02f2b523613aed501c51194fb3b3999b44c2d3af6fbebe82df89df8b843. 
Nov 4 23:54:28.897619 kubelet[2398]: E1104 23:54:28.897519 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.154.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:54:28.913908 kubelet[2398]: E1104 23:54:28.913859 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.154.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.0-n-50b5667972&limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:54:28.955333 kubelet[2398]: E1104 23:54:28.955193 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.154.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:54:28.963986 containerd[1600]: time="2025-11-04T23:54:28.962766347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.0-n-50b5667972,Uid:b57c84615e215db8fbc293fbe0e8fe26,Namespace:kube-system,Attempt:0,} returns sandbox id \"42b0b02f2b523613aed501c51194fb3b3999b44c2d3af6fbebe82df89df8b843\"" Nov 4 23:54:28.966359 kubelet[2398]: E1104 23:54:28.966309 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:28.976308 containerd[1600]: time="2025-11-04T23:54:28.975509600Z" level=info msg="CreateContainer within sandbox \"42b0b02f2b523613aed501c51194fb3b3999b44c2d3af6fbebe82df89df8b843\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:54:28.996364 containerd[1600]: time="2025-11-04T23:54:28.996236330Z" level=info msg="Container 99b0f6f8821b6c739921438e62d461d0d5efc8f4a44cc200161f987e65a3361f: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:29.016263 containerd[1600]: time="2025-11-04T23:54:29.016070380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.0-n-50b5667972,Uid:9de39fa52c9e72b4feee95b27cb4b38f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6583b6079f7d7ab9604a9be7179006afea9e7baaaf25dbf384b33a3038d8c94\"" Nov 4 23:54:29.020164 kubelet[2398]: E1104 23:54:29.020116 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:29.036314 containerd[1600]: time="2025-11-04T23:54:29.036251969Z" level=info msg="CreateContainer within sandbox \"b6583b6079f7d7ab9604a9be7179006afea9e7baaaf25dbf384b33a3038d8c94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:54:29.039981 containerd[1600]: time="2025-11-04T23:54:29.038695194Z" level=info msg="CreateContainer within sandbox \"42b0b02f2b523613aed501c51194fb3b3999b44c2d3af6fbebe82df89df8b843\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"99b0f6f8821b6c739921438e62d461d0d5efc8f4a44cc200161f987e65a3361f\"" Nov 4 23:54:29.043217 containerd[1600]: time="2025-11-04T23:54:29.043141625Z" level=info msg="StartContainer for \"99b0f6f8821b6c739921438e62d461d0d5efc8f4a44cc200161f987e65a3361f\"" Nov 4 23:54:29.044852 containerd[1600]: time="2025-11-04T23:54:29.044543071Z" level=info msg="connecting to shim 99b0f6f8821b6c739921438e62d461d0d5efc8f4a44cc200161f987e65a3361f" address="unix:///run/containerd/s/d9561d3ccd2b43c399f42dfa260d34bf32a523b1f3cef0314466d25dbbedbcdb" protocol=ttrpc version=3 Nov 4 23:54:29.050786 
kubelet[2398]: I1104 23:54:29.050258 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:29.050786 kubelet[2398]: E1104 23:54:29.050735 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.154.5:6443/api/v1/nodes\": dial tcp 64.23.154.5:6443: connect: connection refused" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:29.052554 containerd[1600]: time="2025-11-04T23:54:29.052500291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.0-n-50b5667972,Uid:5d7977a6b72c726b0ee8fc9493ade3b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6195340cf1091f937e7ce00e9a33a7252fb91154b5c4ece078eed5e635a4cb1d\"" Nov 4 23:54:29.053868 kubelet[2398]: E1104 23:54:29.053828 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:29.060985 containerd[1600]: time="2025-11-04T23:54:29.059717965Z" level=info msg="Container c713da99f5c5d56cc766122618a5544ccfdbb74533ac352acdf7b7418016aa81: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:29.060985 containerd[1600]: time="2025-11-04T23:54:29.060143929Z" level=info msg="CreateContainer within sandbox \"6195340cf1091f937e7ce00e9a33a7252fb91154b5c4ece078eed5e635a4cb1d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:54:29.076673 containerd[1600]: time="2025-11-04T23:54:29.076607964Z" level=info msg="CreateContainer within sandbox \"b6583b6079f7d7ab9604a9be7179006afea9e7baaaf25dbf384b33a3038d8c94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c713da99f5c5d56cc766122618a5544ccfdbb74533ac352acdf7b7418016aa81\"" Nov 4 23:54:29.079220 containerd[1600]: time="2025-11-04T23:54:29.079164119Z" level=info msg="Container 789fa9a9134a24d004f737896bbcfc1e603dd1c4a310c5edd3097f79b50a51ec: CDI devices from CRI 
Config.CDIDevices: []" Nov 4 23:54:29.081670 containerd[1600]: time="2025-11-04T23:54:29.080634666Z" level=info msg="StartContainer for \"c713da99f5c5d56cc766122618a5544ccfdbb74533ac352acdf7b7418016aa81\"" Nov 4 23:54:29.089220 containerd[1600]: time="2025-11-04T23:54:29.089141917Z" level=info msg="connecting to shim c713da99f5c5d56cc766122618a5544ccfdbb74533ac352acdf7b7418016aa81" address="unix:///run/containerd/s/acd5e0675a61080b4314ff9c2cfa50512fb3b46e1d7a36a37ee9331db57cf8cb" protocol=ttrpc version=3 Nov 4 23:54:29.095423 systemd[1]: Started cri-containerd-99b0f6f8821b6c739921438e62d461d0d5efc8f4a44cc200161f987e65a3361f.scope - libcontainer container 99b0f6f8821b6c739921438e62d461d0d5efc8f4a44cc200161f987e65a3361f. Nov 4 23:54:29.106835 containerd[1600]: time="2025-11-04T23:54:29.106766014Z" level=info msg="CreateContainer within sandbox \"6195340cf1091f937e7ce00e9a33a7252fb91154b5c4ece078eed5e635a4cb1d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"789fa9a9134a24d004f737896bbcfc1e603dd1c4a310c5edd3097f79b50a51ec\"" Nov 4 23:54:29.110338 containerd[1600]: time="2025-11-04T23:54:29.110275364Z" level=info msg="StartContainer for \"789fa9a9134a24d004f737896bbcfc1e603dd1c4a310c5edd3097f79b50a51ec\"" Nov 4 23:54:29.113913 containerd[1600]: time="2025-11-04T23:54:29.113300326Z" level=info msg="connecting to shim 789fa9a9134a24d004f737896bbcfc1e603dd1c4a310c5edd3097f79b50a51ec" address="unix:///run/containerd/s/9bb0b6170205520ac366546b991606298480f8c4c6ba4eff43d548c85c851cc1" protocol=ttrpc version=3 Nov 4 23:54:29.162494 systemd[1]: Started cri-containerd-c713da99f5c5d56cc766122618a5544ccfdbb74533ac352acdf7b7418016aa81.scope - libcontainer container c713da99f5c5d56cc766122618a5544ccfdbb74533ac352acdf7b7418016aa81. Nov 4 23:54:29.177622 systemd[1]: Started cri-containerd-789fa9a9134a24d004f737896bbcfc1e603dd1c4a310c5edd3097f79b50a51ec.scope - libcontainer container 789fa9a9134a24d004f737896bbcfc1e603dd1c4a310c5edd3097f79b50a51ec. 
Nov 4 23:54:29.261353 containerd[1600]: time="2025-11-04T23:54:29.261283745Z" level=info msg="StartContainer for \"99b0f6f8821b6c739921438e62d461d0d5efc8f4a44cc200161f987e65a3361f\" returns successfully" Nov 4 23:54:29.352783 containerd[1600]: time="2025-11-04T23:54:29.352165466Z" level=info msg="StartContainer for \"789fa9a9134a24d004f737896bbcfc1e603dd1c4a310c5edd3097f79b50a51ec\" returns successfully" Nov 4 23:54:29.376632 containerd[1600]: time="2025-11-04T23:54:29.376514143Z" level=info msg="StartContainer for \"c713da99f5c5d56cc766122618a5544ccfdbb74533ac352acdf7b7418016aa81\" returns successfully" Nov 4 23:54:29.509528 kubelet[2398]: E1104 23:54:29.508939 2398 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.154.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.154.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:54:29.525970 kubelet[2398]: E1104 23:54:29.525893 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:29.526587 kubelet[2398]: E1104 23:54:29.526560 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:29.539751 kubelet[2398]: E1104 23:54:29.539695 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:29.539978 kubelet[2398]: E1104 23:54:29.539921 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Nov 4 23:54:29.548605 kubelet[2398]: E1104 23:54:29.548555 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:29.548817 kubelet[2398]: E1104 23:54:29.548794 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:29.681889 kubelet[2398]: E1104 23:54:29.681660 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.154.5:6443/api/v1/namespaces/default/events\": dial tcp 64.23.154.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.0-n-50b5667972.1874f2f023435159 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.0-n-50b5667972,UID:ci-4487.0.0-n-50b5667972,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.0-n-50b5667972,},FirstTimestamp:2025-11-04 23:54:27.406393689 +0000 UTC m=+1.490156358,LastTimestamp:2025-11-04 23:54:27.406393689 +0000 UTC m=+1.490156358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.0-n-50b5667972,}" Nov 4 23:54:30.549568 kubelet[2398]: E1104 23:54:30.549520 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:30.550088 kubelet[2398]: E1104 23:54:30.549732 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:30.550426 
kubelet[2398]: E1104 23:54:30.550399 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:30.550594 kubelet[2398]: E1104 23:54:30.550575 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:30.652114 kubelet[2398]: I1104 23:54:30.652074 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:31.551901 kubelet[2398]: E1104 23:54:31.551834 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:31.552771 kubelet[2398]: E1104 23:54:31.552076 2398 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:32.260846 kubelet[2398]: E1104 23:54:32.260789 2398 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.0-n-50b5667972\" not found" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:32.306772 kubelet[2398]: I1104 23:54:32.306713 2398 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-50b5667972" Nov 4 23:54:32.306972 kubelet[2398]: E1104 23:54:32.306797 2398 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4487.0.0-n-50b5667972\": node \"ci-4487.0.0-n-50b5667972\" not found" Nov 4 23:54:32.332672 kubelet[2398]: I1104 23:54:32.332602 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" Nov 4 23:54:32.398036 kubelet[2398]: I1104 23:54:32.397979 2398 
apiserver.go:52] "Watching apiserver" Nov 4 23:54:32.432970 kubelet[2398]: I1104 23:54:32.432897 2398 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 23:54:32.471057 kubelet[2398]: E1104 23:54:32.470985 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" Nov 4 23:54:32.471057 kubelet[2398]: I1104 23:54:32.471058 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-50b5667972" Nov 4 23:54:32.473469 kubelet[2398]: E1104 23:54:32.473414 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.0-n-50b5667972\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.0-n-50b5667972" Nov 4 23:54:32.473469 kubelet[2398]: I1104 23:54:32.473478 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972" Nov 4 23:54:32.477970 kubelet[2398]: E1104 23:54:32.477498 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.0-n-50b5667972\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972" Nov 4 23:54:34.914542 systemd[1]: Reload requested from client PID 2681 ('systemctl') (unit session-7.scope)... Nov 4 23:54:34.914576 systemd[1]: Reloading... Nov 4 23:54:35.045011 zram_generator::config[2728]: No configuration found. Nov 4 23:54:35.361818 systemd[1]: Reloading finished in 446 ms. Nov 4 23:54:35.413802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:35.429115 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 4 23:54:35.429816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:35.430077 systemd[1]: kubelet.service: Consumed 2.064s CPU time, 122.1M memory peak. Nov 4 23:54:35.434758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:54:35.704686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:54:35.718669 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:54:36.682001 systemd-resolved[1286]: Clock change detected. Flushing caches. Nov 4 23:54:36.683608 systemd-timesyncd[1465]: Contacted time server 162.159.200.123:123 (2.flatcar.pool.ntp.org). Nov 4 23:54:36.683984 systemd-timesyncd[1465]: Initial clock synchronization to Tue 2025-11-04 23:54:36.681702 UTC. Nov 4 23:54:36.728768 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:54:36.729674 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:54:36.729674 kubelet[2776]: I1104 23:54:36.729605 2776 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:54:36.740208 kubelet[2776]: I1104 23:54:36.740132 2776 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 23:54:36.740208 kubelet[2776]: I1104 23:54:36.740181 2776 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:54:36.740208 kubelet[2776]: I1104 23:54:36.740222 2776 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 23:54:36.740498 kubelet[2776]: I1104 23:54:36.740232 2776 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:54:36.740732 kubelet[2776]: I1104 23:54:36.740695 2776 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:54:36.742814 kubelet[2776]: I1104 23:54:36.742772 2776 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:54:36.752207 kubelet[2776]: I1104 23:54:36.751767 2776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:54:36.760038 kubelet[2776]: I1104 23:54:36.759994 2776 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:54:36.770975 kubelet[2776]: I1104 23:54:36.770927 2776 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 23:54:36.774497 kubelet[2776]: I1104 23:54:36.773616 2776 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:54:36.774497 kubelet[2776]: I1104 23:54:36.773698 2776 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.0-n-50b5667972","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:54:36.774497 kubelet[2776]: I1104 23:54:36.773912 2776 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 
23:54:36.774497 kubelet[2776]: I1104 23:54:36.773928 2776 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 23:54:36.774844 kubelet[2776]: I1104 23:54:36.774823 2776 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 23:54:36.776478 kubelet[2776]: I1104 23:54:36.776445 2776 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:54:36.781832 kubelet[2776]: I1104 23:54:36.781665 2776 kubelet.go:475] "Attempting to sync node with API server" Nov 4 23:54:36.781832 kubelet[2776]: I1104 23:54:36.781709 2776 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:54:36.781832 kubelet[2776]: I1104 23:54:36.781736 2776 kubelet.go:387] "Adding apiserver pod source" Nov 4 23:54:36.782586 kubelet[2776]: I1104 23:54:36.782557 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:54:36.795793 kubelet[2776]: I1104 23:54:36.795619 2776 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:54:36.797522 kubelet[2776]: I1104 23:54:36.797287 2776 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:54:36.800446 kubelet[2776]: I1104 23:54:36.799413 2776 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 23:54:36.819011 kubelet[2776]: I1104 23:54:36.818445 2776 server.go:1262] "Started kubelet" Nov 4 23:54:36.819011 kubelet[2776]: I1104 23:54:36.818682 2776 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:54:36.821127 kubelet[2776]: I1104 23:54:36.821073 2776 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:54:36.821376 kubelet[2776]: I1104 23:54:36.821356 2776 server_v1.go:49] 
"podresources" method="list" useActivePods=true Nov 4 23:54:36.821893 kubelet[2776]: I1104 23:54:36.821870 2776 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:54:36.822479 kubelet[2776]: I1104 23:54:36.822445 2776 server.go:310] "Adding debug handlers to kubelet server" Nov 4 23:54:36.824803 sudo[2790]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 4 23:54:36.827273 sudo[2790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 4 23:54:36.830714 kubelet[2776]: I1104 23:54:36.830575 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:54:36.856245 kubelet[2776]: I1104 23:54:36.855881 2776 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 23:54:36.862934 kubelet[2776]: E1104 23:54:36.859978 2776 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:54:36.862934 kubelet[2776]: I1104 23:54:36.861362 2776 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 23:54:36.862934 kubelet[2776]: I1104 23:54:36.861545 2776 reconciler.go:29] "Reconciler: start to sync state" Nov 4 23:54:36.868118 kubelet[2776]: I1104 23:54:36.866009 2776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:54:36.874488 kubelet[2776]: I1104 23:54:36.873876 2776 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 23:54:36.876534 kubelet[2776]: I1104 23:54:36.876379 2776 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6"
Nov 4 23:54:36.876534 kubelet[2776]: I1104 23:54:36.876417 2776 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 4 23:54:36.876534 kubelet[2776]: I1104 23:54:36.876451 2776 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 4 23:54:36.876534 kubelet[2776]: E1104 23:54:36.876532 2776 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:54:36.878186 kubelet[2776]: I1104 23:54:36.877875 2776 factory.go:223] Registration of the systemd container factory successfully
Nov 4 23:54:36.879315 kubelet[2776]: I1104 23:54:36.879132 2776 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:54:36.890539 kubelet[2776]: I1104 23:54:36.890490 2776 factory.go:223] Registration of the containerd container factory successfully
Nov 4 23:54:36.958869 kubelet[2776]: I1104 23:54:36.958661 2776 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:54:36.958869 kubelet[2776]: I1104 23:54:36.958685 2776 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:54:36.958869 kubelet[2776]: I1104 23:54:36.958716 2776 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:54:36.959708 kubelet[2776]: I1104 23:54:36.958908 2776 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 4 23:54:36.959708 kubelet[2776]: I1104 23:54:36.958924 2776 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 4 23:54:36.959708 kubelet[2776]: I1104 23:54:36.958942 2776 policy_none.go:49] "None policy: Start"
Nov 4 23:54:36.959708 kubelet[2776]: I1104 23:54:36.958952 2776 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 4 23:54:36.959708 kubelet[2776]: I1104 23:54:36.958963 2776 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 4 23:54:36.959708 kubelet[2776]: I1104 23:54:36.959084 2776 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 4 23:54:36.959708 kubelet[2776]: I1104 23:54:36.959093 2776 policy_none.go:47] "Start"
Nov 4 23:54:36.976744 kubelet[2776]: E1104 23:54:36.976689 2776 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 4 23:54:36.979101 kubelet[2776]: E1104 23:54:36.977472 2776 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 23:54:36.979101 kubelet[2776]: I1104 23:54:36.978294 2776 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:54:36.979577 kubelet[2776]: I1104 23:54:36.979513 2776 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:54:36.980191 kubelet[2776]: I1104 23:54:36.980155 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:54:36.982727 kubelet[2776]: E1104 23:54:36.982694 2776 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:54:37.091133 kubelet[2776]: I1104 23:54:37.091091 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.117919 kubelet[2776]: I1104 23:54:37.117867 2776 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.118064 kubelet[2776]: I1104 23:54:37.118007 2776 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.179480 kubelet[2776]: I1104 23:54:37.179291 2776 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.181857 kubelet[2776]: I1104 23:54:37.181735 2776 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.191334 kubelet[2776]: I1104 23:54:37.181295 2776 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.207376 kubelet[2776]: I1104 23:54:37.207292 2776 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:37.211912 kubelet[2776]: I1104 23:54:37.211869 2776 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:37.216392 kubelet[2776]: I1104 23:54:37.216347 2776 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:37.266409 kubelet[2776]: I1104 23:54:37.265415 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266409 kubelet[2776]: I1104 23:54:37.265559 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de39fa52c9e72b4feee95b27cb4b38f-ca-certs\") pod \"kube-apiserver-ci-4487.0.0-n-50b5667972\" (UID: \"9de39fa52c9e72b4feee95b27cb4b38f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266409 kubelet[2776]: I1104 23:54:37.265604 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-ca-certs\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266409 kubelet[2776]: I1104 23:54:37.265649 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266409 kubelet[2776]: I1104 23:54:37.265673 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266806 kubelet[2776]: I1104 23:54:37.265732 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b57c84615e215db8fbc293fbe0e8fe26-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" (UID: \"b57c84615e215db8fbc293fbe0e8fe26\") " pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266806 kubelet[2776]: I1104 23:54:37.265765 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d7977a6b72c726b0ee8fc9493ade3b3-kubeconfig\") pod \"kube-scheduler-ci-4487.0.0-n-50b5667972\" (UID: \"5d7977a6b72c726b0ee8fc9493ade3b3\") " pod="kube-system/kube-scheduler-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266806 kubelet[2776]: I1104 23:54:37.265796 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de39fa52c9e72b4feee95b27cb4b38f-k8s-certs\") pod \"kube-apiserver-ci-4487.0.0-n-50b5667972\" (UID: \"9de39fa52c9e72b4feee95b27cb4b38f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.266806 kubelet[2776]: I1104 23:54:37.265840 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de39fa52c9e72b4feee95b27cb4b38f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.0-n-50b5667972\" (UID: \"9de39fa52c9e72b4feee95b27cb4b38f\") " pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.485617 sudo[2790]: pam_unix(sudo:session): session closed for user root
Nov 4 23:54:37.512463 kubelet[2776]: E1104 23:54:37.512407 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:37.514928 kubelet[2776]: E1104 23:54:37.514724 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:37.518293 kubelet[2776]: E1104 23:54:37.518260 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:37.788497 kubelet[2776]: I1104 23:54:37.788357 2776 apiserver.go:52] "Watching apiserver"
Nov 4 23:54:37.861931 kubelet[2776]: I1104 23:54:37.861867 2776 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 4 23:54:37.928946 kubelet[2776]: I1104 23:54:37.928560 2776 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.934743 kubelet[2776]: E1104 23:54:37.934695 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:37.936402 kubelet[2776]: E1104 23:54:37.935297 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:37.949396 kubelet[2776]: I1104 23:54:37.949270 2776 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 4 23:54:37.949396 kubelet[2776]: E1104 23:54:37.949363 2776 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.0-n-50b5667972\" already exists" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972"
Nov 4 23:54:37.950535 kubelet[2776]: E1104 23:54:37.949608 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:37.989867 kubelet[2776]: I1104 23:54:37.989710 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.0-n-50b5667972" podStartSLOduration=0.989639192 podStartE2EDuration="989.639192ms" podCreationTimestamp="2025-11-04 23:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:37.980575719 +0000 UTC m=+1.347444317" watchObservedRunningTime="2025-11-04 23:54:37.989639192 +0000 UTC m=+1.356507759"
Nov 4 23:54:38.029083 kubelet[2776]: I1104 23:54:38.028934 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.0-n-50b5667972" podStartSLOduration=1.028885044 podStartE2EDuration="1.028885044s" podCreationTimestamp="2025-11-04 23:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:38.028802602 +0000 UTC m=+1.395671202" watchObservedRunningTime="2025-11-04 23:54:38.028885044 +0000 UTC m=+1.395753647"
Nov 4 23:54:38.125435 kubelet[2776]: I1104 23:54:38.124024 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.0-n-50b5667972" podStartSLOduration=1.123995425 podStartE2EDuration="1.123995425s" podCreationTimestamp="2025-11-04 23:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:38.068582707 +0000 UTC m=+1.435451312" watchObservedRunningTime="2025-11-04 23:54:38.123995425 +0000 UTC m=+1.490864024"
Nov 4 23:54:38.931486 kubelet[2776]: E1104 23:54:38.930576 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:38.931969 kubelet[2776]: E1104 23:54:38.931585 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:38.937595 kubelet[2776]: E1104 23:54:38.937454 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:39.678500 sudo[1834]: pam_unix(sudo:session): session closed for user root
Nov 4 23:54:39.682689 sshd[1833]: Connection closed by 139.178.89.65 port 46456
Nov 4 23:54:39.684850 sshd-session[1830]: pam_unix(sshd:session): session closed for user core
Nov 4 23:54:39.691505 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit.
Nov 4 23:54:39.692495 systemd[1]: sshd@6-64.23.154.5:22-139.178.89.65:46456.service: Deactivated successfully.
Nov 4 23:54:39.696481 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 23:54:39.697074 systemd[1]: session-7.scope: Consumed 8.103s CPU time, 223.7M memory peak.
Nov 4 23:54:39.701147 systemd-logind[1571]: Removed session 7.
Nov 4 23:54:39.933248 kubelet[2776]: E1104 23:54:39.933062 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:41.506245 kubelet[2776]: I1104 23:54:41.506208 2776 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 4 23:54:41.507078 kubelet[2776]: I1104 23:54:41.507010 2776 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 23:54:41.507159 containerd[1600]: time="2025-11-04T23:54:41.506819941Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 4 23:54:41.644036 kubelet[2776]: E1104 23:54:41.643987 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:41.937531 kubelet[2776]: E1104 23:54:41.937211 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:42.572808 systemd[1]: Created slice kubepods-besteffort-poda883aba7_a4f0_48ee_aaa7_60f45b286dcc.slice - libcontainer container kubepods-besteffort-poda883aba7_a4f0_48ee_aaa7_60f45b286dcc.slice.
Nov 4 23:54:42.594411 systemd[1]: Created slice kubepods-burstable-pod6bc7019d_9c96_4edf_a83b_2bef1113a48e.slice - libcontainer container kubepods-burstable-pod6bc7019d_9c96_4edf_a83b_2bef1113a48e.slice.
Nov 4 23:54:42.606745 kubelet[2776]: I1104 23:54:42.606694 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-bpf-maps\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607215 kubelet[2776]: I1104 23:54:42.606775 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-lib-modules\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607215 kubelet[2776]: I1104 23:54:42.606795 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bc7019d-9c96-4edf-a83b-2bef1113a48e-clustermesh-secrets\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607215 kubelet[2776]: I1104 23:54:42.606840 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cni-path\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607215 kubelet[2776]: I1104 23:54:42.606909 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-xtables-lock\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607215 kubelet[2776]: I1104 23:54:42.606928 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-config-path\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607926 kubelet[2776]: I1104 23:54:42.607233 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-net\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607926 kubelet[2776]: I1104 23:54:42.607265 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-kernel\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607926 kubelet[2776]: I1104 23:54:42.607291 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hubble-tls\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607926 kubelet[2776]: I1104 23:54:42.607437 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwk6p\" (UniqueName: \"kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-kube-api-access-xwk6p\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.607926 kubelet[2776]: I1104 23:54:42.607471 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-cgroup\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.608052 kubelet[2776]: I1104 23:54:42.607502 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-etc-cni-netd\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.608052 kubelet[2776]: I1104 23:54:42.607562 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a883aba7-a4f0-48ee-aaa7-60f45b286dcc-xtables-lock\") pod \"kube-proxy-29x92\" (UID: \"a883aba7-a4f0-48ee-aaa7-60f45b286dcc\") " pod="kube-system/kube-proxy-29x92"
Nov 4 23:54:42.608052 kubelet[2776]: I1104 23:54:42.607580 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a883aba7-a4f0-48ee-aaa7-60f45b286dcc-lib-modules\") pod \"kube-proxy-29x92\" (UID: \"a883aba7-a4f0-48ee-aaa7-60f45b286dcc\") " pod="kube-system/kube-proxy-29x92"
Nov 4 23:54:42.608052 kubelet[2776]: I1104 23:54:42.607597 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-run\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.608052 kubelet[2776]: I1104 23:54:42.607615 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a883aba7-a4f0-48ee-aaa7-60f45b286dcc-kube-proxy\") pod \"kube-proxy-29x92\" (UID: \"a883aba7-a4f0-48ee-aaa7-60f45b286dcc\") " pod="kube-system/kube-proxy-29x92"
Nov 4 23:54:42.608175 kubelet[2776]: I1104 23:54:42.607630 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbm66\" (UniqueName: \"kubernetes.io/projected/a883aba7-a4f0-48ee-aaa7-60f45b286dcc-kube-api-access-pbm66\") pod \"kube-proxy-29x92\" (UID: \"a883aba7-a4f0-48ee-aaa7-60f45b286dcc\") " pod="kube-system/kube-proxy-29x92"
Nov 4 23:54:42.608175 kubelet[2776]: I1104 23:54:42.607646 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hostproc\") pod \"cilium-sblsz\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " pod="kube-system/cilium-sblsz"
Nov 4 23:54:42.702876 systemd[1]: Created slice kubepods-besteffort-podfde141ce_658c_4947_84ce_8de61f26a185.slice - libcontainer container kubepods-besteffort-podfde141ce_658c_4947_84ce_8de61f26a185.slice.
Nov 4 23:54:42.709090 kubelet[2776]: I1104 23:54:42.708617 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9456\" (UniqueName: \"kubernetes.io/projected/fde141ce-658c-4947-84ce-8de61f26a185-kube-api-access-h9456\") pod \"cilium-operator-6f9c7c5859-p7qzk\" (UID: \"fde141ce-658c-4947-84ce-8de61f26a185\") " pod="kube-system/cilium-operator-6f9c7c5859-p7qzk"
Nov 4 23:54:42.709090 kubelet[2776]: I1104 23:54:42.708684 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fde141ce-658c-4947-84ce-8de61f26a185-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-p7qzk\" (UID: \"fde141ce-658c-4947-84ce-8de61f26a185\") " pod="kube-system/cilium-operator-6f9c7c5859-p7qzk"
Nov 4 23:54:42.890316 kubelet[2776]: E1104 23:54:42.890156 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:42.893380 containerd[1600]: time="2025-11-04T23:54:42.893256933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29x92,Uid:a883aba7-a4f0-48ee-aaa7-60f45b286dcc,Namespace:kube-system,Attempt:0,}"
Nov 4 23:54:42.903564 kubelet[2776]: E1104 23:54:42.903504 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:42.905704 containerd[1600]: time="2025-11-04T23:54:42.905625014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sblsz,Uid:6bc7019d-9c96-4edf-a83b-2bef1113a48e,Namespace:kube-system,Attempt:0,}"
Nov 4 23:54:42.921582 containerd[1600]: time="2025-11-04T23:54:42.921514813Z" level=info msg="connecting to shim 844d7426d786ae9e3e2dbc908e2c910ab891028da697033d9783e68728a9596b" address="unix:///run/containerd/s/7f272cd2ed9cd2e88fa28cb10fe9f4fd22ccf11c68a02aee576db45f0c4707aa" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:54:42.934368 containerd[1600]: time="2025-11-04T23:54:42.934145830Z" level=info msg="connecting to shim ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05" address="unix:///run/containerd/s/d516f3b76b9fd6a02b637f012eb4078afdffd69c6a23df294fa6885476e376a8" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:54:42.941013 kubelet[2776]: E1104 23:54:42.940940 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:42.975797 systemd[1]: Started cri-containerd-844d7426d786ae9e3e2dbc908e2c910ab891028da697033d9783e68728a9596b.scope - libcontainer container 844d7426d786ae9e3e2dbc908e2c910ab891028da697033d9783e68728a9596b.
Nov 4 23:54:43.005676 systemd[1]: Started cri-containerd-ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05.scope - libcontainer container ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05.
Nov 4 23:54:43.014034 kubelet[2776]: E1104 23:54:43.013958 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:43.016403 containerd[1600]: time="2025-11-04T23:54:43.016265892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-p7qzk,Uid:fde141ce-658c-4947-84ce-8de61f26a185,Namespace:kube-system,Attempt:0,}"
Nov 4 23:54:43.044884 containerd[1600]: time="2025-11-04T23:54:43.044803278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29x92,Uid:a883aba7-a4f0-48ee-aaa7-60f45b286dcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"844d7426d786ae9e3e2dbc908e2c910ab891028da697033d9783e68728a9596b\""
Nov 4 23:54:43.046986 kubelet[2776]: E1104 23:54:43.046946 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:43.059578 containerd[1600]: time="2025-11-04T23:54:43.059532915Z" level=info msg="CreateContainer within sandbox \"844d7426d786ae9e3e2dbc908e2c910ab891028da697033d9783e68728a9596b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 4 23:54:43.074154 containerd[1600]: time="2025-11-04T23:54:43.073684974Z" level=info msg="Container 8c7de72e1f2db0249396f40ab7fb3ab6940ee7ad54c9f64ed30b79bf086788d3: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:54:43.077475 containerd[1600]: time="2025-11-04T23:54:43.077396450Z" level=info msg="connecting to shim d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74" address="unix:///run/containerd/s/d127d07b234f103db2b97d421376064d46c3ffc389c9ae25cbd502242f1a04fd" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:54:43.089392 containerd[1600]: time="2025-11-04T23:54:43.089296217Z" level=info msg="CreateContainer within sandbox \"844d7426d786ae9e3e2dbc908e2c910ab891028da697033d9783e68728a9596b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c7de72e1f2db0249396f40ab7fb3ab6940ee7ad54c9f64ed30b79bf086788d3\""
Nov 4 23:54:43.091102 containerd[1600]: time="2025-11-04T23:54:43.090911808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sblsz,Uid:6bc7019d-9c96-4edf-a83b-2bef1113a48e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\""
Nov 4 23:54:43.092370 containerd[1600]: time="2025-11-04T23:54:43.092342512Z" level=info msg="StartContainer for \"8c7de72e1f2db0249396f40ab7fb3ab6940ee7ad54c9f64ed30b79bf086788d3\""
Nov 4 23:54:43.093709 kubelet[2776]: E1104 23:54:43.093670 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:43.097029 containerd[1600]: time="2025-11-04T23:54:43.096979610Z" level=info msg="connecting to shim 8c7de72e1f2db0249396f40ab7fb3ab6940ee7ad54c9f64ed30b79bf086788d3" address="unix:///run/containerd/s/7f272cd2ed9cd2e88fa28cb10fe9f4fd22ccf11c68a02aee576db45f0c4707aa" protocol=ttrpc version=3
Nov 4 23:54:43.098034 containerd[1600]: time="2025-11-04T23:54:43.097001312Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 4 23:54:43.134651 systemd[1]: Started cri-containerd-8c7de72e1f2db0249396f40ab7fb3ab6940ee7ad54c9f64ed30b79bf086788d3.scope - libcontainer container 8c7de72e1f2db0249396f40ab7fb3ab6940ee7ad54c9f64ed30b79bf086788d3.
Nov 4 23:54:43.144806 systemd[1]: Started cri-containerd-d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74.scope - libcontainer container d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74.
Nov 4 23:54:43.217858 containerd[1600]: time="2025-11-04T23:54:43.217808828Z" level=info msg="StartContainer for \"8c7de72e1f2db0249396f40ab7fb3ab6940ee7ad54c9f64ed30b79bf086788d3\" returns successfully"
Nov 4 23:54:43.250049 containerd[1600]: time="2025-11-04T23:54:43.249880243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-p7qzk,Uid:fde141ce-658c-4947-84ce-8de61f26a185,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\""
Nov 4 23:54:43.252076 kubelet[2776]: E1104 23:54:43.251503 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:43.956823 kubelet[2776]: E1104 23:54:43.956763 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:43.979001 kubelet[2776]: I1104 23:54:43.978885 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29x92" podStartSLOduration=1.978856989 podStartE2EDuration="1.978856989s" podCreationTimestamp="2025-11-04 23:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:54:43.976382865 +0000 UTC m=+7.343251462" watchObservedRunningTime="2025-11-04 23:54:43.978856989 +0000 UTC m=+7.345725581"
Nov 4 23:54:44.046547 kubelet[2776]: E1104 23:54:44.046108 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:44.963512 kubelet[2776]: E1104 23:54:44.963439 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:44.965347 update_engine[1573]: I20251104 23:54:44.964870 1573 update_attempter.cc:509] Updating boot flags...
Nov 4 23:54:45.966374 kubelet[2776]: E1104 23:54:45.966288 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:48.331898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343797357.mount: Deactivated successfully.
Nov 4 23:54:48.802688 kubelet[2776]: E1104 23:54:48.802045 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:51.214515 containerd[1600]: time="2025-11-04T23:54:51.213935805Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Nov 4 23:54:51.215868 containerd[1600]: time="2025-11-04T23:54:51.215830580Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.11767531s"
Nov 4 23:54:51.216047 containerd[1600]: time="2025-11-04T23:54:51.216028852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 4 23:54:51.219946 containerd[1600]: time="2025-11-04T23:54:51.219904430Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 4 23:54:51.234013 containerd[1600]: time="2025-11-04T23:54:51.233550475Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 4 23:54:51.253857 containerd[1600]: time="2025-11-04T23:54:51.253727567Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:51.257705 containerd[1600]: time="2025-11-04T23:54:51.256591540Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:54:51.317390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464056367.mount: Deactivated successfully.
Nov 4 23:54:51.338365 containerd[1600]: time="2025-11-04T23:54:51.336535842Z" level=info msg="Container 36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:54:51.340231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029468403.mount: Deactivated successfully.
Nov 4 23:54:51.347743 containerd[1600]: time="2025-11-04T23:54:51.347580320Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\""
Nov 4 23:54:51.350152 containerd[1600]: time="2025-11-04T23:54:51.350073540Z" level=info msg="StartContainer for \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\""
Nov 4 23:54:51.353386 containerd[1600]: time="2025-11-04T23:54:51.353301676Z" level=info msg="connecting to shim 36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826" address="unix:///run/containerd/s/d516f3b76b9fd6a02b637f012eb4078afdffd69c6a23df294fa6885476e376a8" protocol=ttrpc version=3
Nov 4 23:54:51.388630 systemd[1]: Started cri-containerd-36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826.scope - libcontainer container 36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826.
Nov 4 23:54:51.444430 containerd[1600]: time="2025-11-04T23:54:51.444306794Z" level=info msg="StartContainer for \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" returns successfully"
Nov 4 23:54:51.466963 systemd[1]: cri-containerd-36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826.scope: Deactivated successfully.
Nov 4 23:54:51.486897 containerd[1600]: time="2025-11-04T23:54:51.486839135Z" level=info msg="received exit event container_id:\"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" id:\"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" pid:3215 exited_at:{seconds:1762300491 nanos:471443527}"
Nov 4 23:54:51.504886 containerd[1600]: time="2025-11-04T23:54:51.504810858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" id:\"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" pid:3215 exited_at:{seconds:1762300491 nanos:471443527}"
Nov 4 23:54:51.996225 kubelet[2776]: E1104 23:54:51.996178 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:54:52.007092 containerd[1600]: time="2025-11-04T23:54:52.006573853Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 4 23:54:52.020575 containerd[1600]: time="2025-11-04T23:54:52.020510016Z" level=info msg="Container 1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:54:52.043972 containerd[1600]: time="2025-11-04T23:54:52.043763399Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\""
Nov 4 23:54:52.045655 containerd[1600]: time="2025-11-04T23:54:52.045594136Z" level=info msg="StartContainer for \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\""
Nov 4 23:54:52.047879 containerd[1600]:
time="2025-11-04T23:54:52.047783363Z" level=info msg="connecting to shim 1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54" address="unix:///run/containerd/s/d516f3b76b9fd6a02b637f012eb4078afdffd69c6a23df294fa6885476e376a8" protocol=ttrpc version=3 Nov 4 23:54:52.081884 systemd[1]: Started cri-containerd-1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54.scope - libcontainer container 1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54. Nov 4 23:54:52.153859 containerd[1600]: time="2025-11-04T23:54:52.153789108Z" level=info msg="StartContainer for \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" returns successfully" Nov 4 23:54:52.177180 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:54:52.177641 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:54:52.177762 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:54:52.181796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:54:52.187612 systemd[1]: cri-containerd-1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54.scope: Deactivated successfully. Nov 4 23:54:52.189588 containerd[1600]: time="2025-11-04T23:54:52.188646775Z" level=info msg="received exit event container_id:\"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" id:\"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" pid:3261 exited_at:{seconds:1762300492 nanos:188044019}" Nov 4 23:54:52.189588 containerd[1600]: time="2025-11-04T23:54:52.189037726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" id:\"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" pid:3261 exited_at:{seconds:1762300492 nanos:188044019}" Nov 4 23:54:52.240974 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 4 23:54:52.309000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826-rootfs.mount: Deactivated successfully. Nov 4 23:54:53.006774 kubelet[2776]: E1104 23:54:53.006695 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:53.022553 containerd[1600]: time="2025-11-04T23:54:53.020636884Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 23:54:53.095003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965933949.mount: Deactivated successfully. Nov 4 23:54:53.100311 containerd[1600]: time="2025-11-04T23:54:53.100118228Z" level=info msg="Container aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:53.119713 containerd[1600]: time="2025-11-04T23:54:53.119660344Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\"" Nov 4 23:54:53.124445 containerd[1600]: time="2025-11-04T23:54:53.124393127Z" level=info msg="StartContainer for \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\"" Nov 4 23:54:53.129094 containerd[1600]: time="2025-11-04T23:54:53.129033180Z" level=info msg="connecting to shim aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6" address="unix:///run/containerd/s/d516f3b76b9fd6a02b637f012eb4078afdffd69c6a23df294fa6885476e376a8" protocol=ttrpc version=3 Nov 4 23:54:53.191666 systemd[1]: Started cri-containerd-aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6.scope - 
libcontainer container aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6. Nov 4 23:54:53.305568 systemd[1]: cri-containerd-aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6.scope: Deactivated successfully. Nov 4 23:54:53.306925 systemd[1]: cri-containerd-aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6.scope: Consumed 48ms CPU time, 5.8M memory peak, 1M read from disk. Nov 4 23:54:53.316474 containerd[1600]: time="2025-11-04T23:54:53.316305366Z" level=info msg="received exit event container_id:\"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" id:\"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" pid:3321 exited_at:{seconds:1762300493 nanos:315199642}" Nov 4 23:54:53.322124 containerd[1600]: time="2025-11-04T23:54:53.322057180Z" level=info msg="StartContainer for \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" returns successfully" Nov 4 23:54:53.326093 containerd[1600]: time="2025-11-04T23:54:53.326030622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" id:\"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" pid:3321 exited_at:{seconds:1762300493 nanos:315199642}" Nov 4 23:54:53.391042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6-rootfs.mount: Deactivated successfully. 
Nov 4 23:54:53.521546 containerd[1600]: time="2025-11-04T23:54:53.521467631Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:53.522743 containerd[1600]: time="2025-11-04T23:54:53.522629553Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 4 23:54:53.524677 containerd[1600]: time="2025-11-04T23:54:53.524276988Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:54:53.529055 containerd[1600]: time="2025-11-04T23:54:53.528995861Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.309042849s" Nov 4 23:54:53.529542 containerd[1600]: time="2025-11-04T23:54:53.529289699Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 4 23:54:53.538213 containerd[1600]: time="2025-11-04T23:54:53.538152575Z" level=info msg="CreateContainer within sandbox \"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 4 23:54:53.555768 containerd[1600]: time="2025-11-04T23:54:53.553901241Z" level=info msg="Container 
7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:53.571164 containerd[1600]: time="2025-11-04T23:54:53.571032256Z" level=info msg="CreateContainer within sandbox \"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\"" Nov 4 23:54:53.573528 containerd[1600]: time="2025-11-04T23:54:53.572593868Z" level=info msg="StartContainer for \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\"" Nov 4 23:54:53.574224 containerd[1600]: time="2025-11-04T23:54:53.574177918Z" level=info msg="connecting to shim 7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041" address="unix:///run/containerd/s/d127d07b234f103db2b97d421376064d46c3ffc389c9ae25cbd502242f1a04fd" protocol=ttrpc version=3 Nov 4 23:54:53.603657 systemd[1]: Started cri-containerd-7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041.scope - libcontainer container 7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041. 
Nov 4 23:54:53.676033 containerd[1600]: time="2025-11-04T23:54:53.675973426Z" level=info msg="StartContainer for \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" returns successfully" Nov 4 23:54:54.015102 kubelet[2776]: E1104 23:54:54.015044 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:54.028344 kubelet[2776]: E1104 23:54:54.028283 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:54.038387 containerd[1600]: time="2025-11-04T23:54:54.037567377Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 23:54:54.061455 containerd[1600]: time="2025-11-04T23:54:54.061024263Z" level=info msg="Container af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:54.077018 containerd[1600]: time="2025-11-04T23:54:54.076961990Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\"" Nov 4 23:54:54.080786 containerd[1600]: time="2025-11-04T23:54:54.080471018Z" level=info msg="StartContainer for \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\"" Nov 4 23:54:54.082268 containerd[1600]: time="2025-11-04T23:54:54.082207057Z" level=info msg="connecting to shim af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719" address="unix:///run/containerd/s/d516f3b76b9fd6a02b637f012eb4078afdffd69c6a23df294fa6885476e376a8" protocol=ttrpc 
version=3 Nov 4 23:54:54.144657 systemd[1]: Started cri-containerd-af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719.scope - libcontainer container af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719. Nov 4 23:54:54.248290 systemd[1]: cri-containerd-af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719.scope: Deactivated successfully. Nov 4 23:54:54.252524 containerd[1600]: time="2025-11-04T23:54:54.252271035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" id:\"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" pid:3397 exited_at:{seconds:1762300494 nanos:249600984}" Nov 4 23:54:54.253236 containerd[1600]: time="2025-11-04T23:54:54.252982644Z" level=info msg="received exit event container_id:\"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" id:\"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" pid:3397 exited_at:{seconds:1762300494 nanos:249600984}" Nov 4 23:54:54.260890 containerd[1600]: time="2025-11-04T23:54:54.260817411Z" level=info msg="StartContainer for \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" returns successfully" Nov 4 23:54:54.269520 kubelet[2776]: I1104 23:54:54.269189 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-p7qzk" podStartSLOduration=1.991602068 podStartE2EDuration="12.26916096s" podCreationTimestamp="2025-11-04 23:54:42 +0000 UTC" firstStartedPulling="2025-11-04 23:54:43.253296928 +0000 UTC m=+6.620165520" lastFinishedPulling="2025-11-04 23:54:53.530855826 +0000 UTC m=+16.897724412" observedRunningTime="2025-11-04 23:54:54.12866743 +0000 UTC m=+17.495536024" watchObservedRunningTime="2025-11-04 23:54:54.26916096 +0000 UTC m=+17.636029566" Nov 4 23:54:54.388305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234328332.mount: Deactivated successfully. 
Nov 4 23:54:55.039032 kubelet[2776]: E1104 23:54:55.038990 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:55.041270 kubelet[2776]: E1104 23:54:55.039697 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:55.048647 containerd[1600]: time="2025-11-04T23:54:55.048586275Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 23:54:55.073364 containerd[1600]: time="2025-11-04T23:54:55.071240558Z" level=info msg="Container db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:54:55.079168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003818106.mount: Deactivated successfully. 
Nov 4 23:54:55.094705 containerd[1600]: time="2025-11-04T23:54:55.094658070Z" level=info msg="CreateContainer within sandbox \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\"" Nov 4 23:54:55.096359 containerd[1600]: time="2025-11-04T23:54:55.096282277Z" level=info msg="StartContainer for \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\"" Nov 4 23:54:55.098177 containerd[1600]: time="2025-11-04T23:54:55.098102447Z" level=info msg="connecting to shim db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5" address="unix:///run/containerd/s/d516f3b76b9fd6a02b637f012eb4078afdffd69c6a23df294fa6885476e376a8" protocol=ttrpc version=3 Nov 4 23:54:55.141219 systemd[1]: Started cri-containerd-db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5.scope - libcontainer container db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5. Nov 4 23:54:55.225723 containerd[1600]: time="2025-11-04T23:54:55.225672762Z" level=info msg="StartContainer for \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" returns successfully" Nov 4 23:54:55.383845 containerd[1600]: time="2025-11-04T23:54:55.383429901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" id:\"95199fd0f6343da2c91683b132df483be8979f387e5a58a730095fd2bf9854f2\" pid:3467 exited_at:{seconds:1762300495 nanos:382738731}" Nov 4 23:54:55.413392 kubelet[2776]: I1104 23:54:55.412432 2776 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 23:54:55.500428 systemd[1]: Created slice kubepods-burstable-podc5cd85c9_a7f6_48c0_aad8_1a3e1081ae08.slice - libcontainer container kubepods-burstable-podc5cd85c9_a7f6_48c0_aad8_1a3e1081ae08.slice. 
Nov 4 23:54:55.525558 systemd[1]: Created slice kubepods-burstable-pod7dcf0b08_8df9_41f3_85a5_398bba419c11.slice - libcontainer container kubepods-burstable-pod7dcf0b08_8df9_41f3_85a5_398bba419c11.slice. Nov 4 23:54:55.529362 kubelet[2776]: I1104 23:54:55.525622 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5cd85c9-a7f6-48c0-aad8-1a3e1081ae08-config-volume\") pod \"coredns-66bc5c9577-lkfd5\" (UID: \"c5cd85c9-a7f6-48c0-aad8-1a3e1081ae08\") " pod="kube-system/coredns-66bc5c9577-lkfd5" Nov 4 23:54:55.529362 kubelet[2776]: I1104 23:54:55.525721 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dcf0b08-8df9-41f3-85a5-398bba419c11-config-volume\") pod \"coredns-66bc5c9577-h8fjn\" (UID: \"7dcf0b08-8df9-41f3-85a5-398bba419c11\") " pod="kube-system/coredns-66bc5c9577-h8fjn" Nov 4 23:54:55.529362 kubelet[2776]: I1104 23:54:55.525757 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxxrn\" (UniqueName: \"kubernetes.io/projected/7dcf0b08-8df9-41f3-85a5-398bba419c11-kube-api-access-hxxrn\") pod \"coredns-66bc5c9577-h8fjn\" (UID: \"7dcf0b08-8df9-41f3-85a5-398bba419c11\") " pod="kube-system/coredns-66bc5c9577-h8fjn" Nov 4 23:54:55.529362 kubelet[2776]: I1104 23:54:55.525800 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk7w5\" (UniqueName: \"kubernetes.io/projected/c5cd85c9-a7f6-48c0-aad8-1a3e1081ae08-kube-api-access-wk7w5\") pod \"coredns-66bc5c9577-lkfd5\" (UID: \"c5cd85c9-a7f6-48c0-aad8-1a3e1081ae08\") " pod="kube-system/coredns-66bc5c9577-lkfd5" Nov 4 23:54:55.817224 kubelet[2776]: E1104 23:54:55.817003 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:55.818283 containerd[1600]: time="2025-11-04T23:54:55.818230372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkfd5,Uid:c5cd85c9-a7f6-48c0-aad8-1a3e1081ae08,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:55.842654 kubelet[2776]: E1104 23:54:55.842606 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:55.858101 containerd[1600]: time="2025-11-04T23:54:55.858021656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-h8fjn,Uid:7dcf0b08-8df9-41f3-85a5-398bba419c11,Namespace:kube-system,Attempt:0,}" Nov 4 23:54:56.066275 kubelet[2776]: E1104 23:54:56.066165 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:56.109250 kubelet[2776]: I1104 23:54:56.108737 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sblsz" podStartSLOduration=5.986666234 podStartE2EDuration="14.108702814s" podCreationTimestamp="2025-11-04 23:54:42 +0000 UTC" firstStartedPulling="2025-11-04 23:54:43.096436961 +0000 UTC m=+6.463305543" lastFinishedPulling="2025-11-04 23:54:51.218473557 +0000 UTC m=+14.585342123" observedRunningTime="2025-11-04 23:54:56.105276887 +0000 UTC m=+19.472145479" watchObservedRunningTime="2025-11-04 23:54:56.108702814 +0000 UTC m=+19.475571410" Nov 4 23:54:57.069339 kubelet[2776]: E1104 23:54:57.069251 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:57.957103 systemd-networkd[1497]: cilium_host: Link UP Nov 4 23:54:57.958182 
systemd-networkd[1497]: cilium_net: Link UP Nov 4 23:54:57.959711 systemd-networkd[1497]: cilium_net: Gained carrier Nov 4 23:54:57.959886 systemd-networkd[1497]: cilium_host: Gained carrier Nov 4 23:54:58.071942 kubelet[2776]: E1104 23:54:58.071874 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:54:58.152185 systemd-networkd[1497]: cilium_vxlan: Link UP Nov 4 23:54:58.152201 systemd-networkd[1497]: cilium_vxlan: Gained carrier Nov 4 23:54:58.391669 systemd-networkd[1497]: cilium_net: Gained IPv6LL Nov 4 23:54:58.584376 kernel: NET: Registered PF_ALG protocol family Nov 4 23:54:58.911863 systemd-networkd[1497]: cilium_host: Gained IPv6LL Nov 4 23:54:59.603274 systemd-networkd[1497]: lxc_health: Link UP Nov 4 23:54:59.613740 systemd-networkd[1497]: lxc_health: Gained carrier Nov 4 23:54:59.808504 systemd-networkd[1497]: cilium_vxlan: Gained IPv6LL Nov 4 23:54:59.936669 kernel: eth0: renamed from tmp1df48 Nov 4 23:54:59.940734 systemd-networkd[1497]: lxc7179ca49c510: Link UP Nov 4 23:54:59.943800 systemd-networkd[1497]: lxc7179ca49c510: Gained carrier Nov 4 23:54:59.977764 systemd-networkd[1497]: lxc885400a46f96: Link UP Nov 4 23:54:59.986442 kernel: eth0: renamed from tmp61126 Nov 4 23:54:59.988733 systemd-networkd[1497]: lxc885400a46f96: Gained carrier Nov 4 23:55:00.905936 kubelet[2776]: E1104 23:55:00.905874 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:01.407630 systemd-networkd[1497]: lxc7179ca49c510: Gained IPv6LL Nov 4 23:55:01.461658 kubelet[2776]: I1104 23:55:01.461460 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:55:01.463728 kubelet[2776]: E1104 23:55:01.463666 2776 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:01.471549 systemd-networkd[1497]: lxc_health: Gained IPv6LL Nov 4 23:55:01.855728 systemd-networkd[1497]: lxc885400a46f96: Gained IPv6LL Nov 4 23:55:02.090885 kubelet[2776]: E1104 23:55:02.090816 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:06.550665 containerd[1600]: time="2025-11-04T23:55:06.550560987Z" level=info msg="connecting to shim 1df486c376d6683b8fc1c07a7013eebc259f1f1070c31ef13ad0f27958b23821" address="unix:///run/containerd/s/1087bff6a8ba07459ef9aae95bdf0b2cd320b8bc954b3a49a0b9e8d7621c95a2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:06.594728 containerd[1600]: time="2025-11-04T23:55:06.594616477Z" level=info msg="connecting to shim 6112675854dd372f45fbdacd924bc97e7864cdcc3ca577a93f99999e1fb7581e" address="unix:///run/containerd/s/4bbc2ed4a31533c08ae2835c7f08c73cbc0c4f250601927253eeb08e605ef8e2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:06.620915 systemd[1]: Started cri-containerd-1df486c376d6683b8fc1c07a7013eebc259f1f1070c31ef13ad0f27958b23821.scope - libcontainer container 1df486c376d6683b8fc1c07a7013eebc259f1f1070c31ef13ad0f27958b23821. Nov 4 23:55:06.667240 systemd[1]: Started cri-containerd-6112675854dd372f45fbdacd924bc97e7864cdcc3ca577a93f99999e1fb7581e.scope - libcontainer container 6112675854dd372f45fbdacd924bc97e7864cdcc3ca577a93f99999e1fb7581e. 
Nov 4 23:55:06.749477 containerd[1600]: time="2025-11-04T23:55:06.749431107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkfd5,Uid:c5cd85c9-a7f6-48c0-aad8-1a3e1081ae08,Namespace:kube-system,Attempt:0,} returns sandbox id \"1df486c376d6683b8fc1c07a7013eebc259f1f1070c31ef13ad0f27958b23821\"" Nov 4 23:55:06.762284 kubelet[2776]: E1104 23:55:06.762171 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:06.767145 containerd[1600]: time="2025-11-04T23:55:06.767061824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-h8fjn,Uid:7dcf0b08-8df9-41f3-85a5-398bba419c11,Namespace:kube-system,Attempt:0,} returns sandbox id \"6112675854dd372f45fbdacd924bc97e7864cdcc3ca577a93f99999e1fb7581e\"" Nov 4 23:55:06.771572 kubelet[2776]: E1104 23:55:06.770761 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:06.791961 containerd[1600]: time="2025-11-04T23:55:06.791880697Z" level=info msg="CreateContainer within sandbox \"1df486c376d6683b8fc1c07a7013eebc259f1f1070c31ef13ad0f27958b23821\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:55:06.792470 containerd[1600]: time="2025-11-04T23:55:06.791880982Z" level=info msg="CreateContainer within sandbox \"6112675854dd372f45fbdacd924bc97e7864cdcc3ca577a93f99999e1fb7581e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:55:06.822574 containerd[1600]: time="2025-11-04T23:55:06.822272256Z" level=info msg="Container 3170d09b5445c54218a0bff0c5e5a1a68ccf94b73810dd7e0c50be110c1176f3: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:06.829274 containerd[1600]: time="2025-11-04T23:55:06.829198696Z" level=info msg="Container 
4a3f48e97cd1ebf7ee55d5b2ec147d4a1eb077947444ee8bb99bee5368ef1b04: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:06.841270 containerd[1600]: time="2025-11-04T23:55:06.841200864Z" level=info msg="CreateContainer within sandbox \"6112675854dd372f45fbdacd924bc97e7864cdcc3ca577a93f99999e1fb7581e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3170d09b5445c54218a0bff0c5e5a1a68ccf94b73810dd7e0c50be110c1176f3\"" Nov 4 23:55:06.845655 containerd[1600]: time="2025-11-04T23:55:06.844696721Z" level=info msg="StartContainer for \"3170d09b5445c54218a0bff0c5e5a1a68ccf94b73810dd7e0c50be110c1176f3\"" Nov 4 23:55:06.846134 containerd[1600]: time="2025-11-04T23:55:06.845887064Z" level=info msg="CreateContainer within sandbox \"1df486c376d6683b8fc1c07a7013eebc259f1f1070c31ef13ad0f27958b23821\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a3f48e97cd1ebf7ee55d5b2ec147d4a1eb077947444ee8bb99bee5368ef1b04\"" Nov 4 23:55:06.848029 containerd[1600]: time="2025-11-04T23:55:06.847983526Z" level=info msg="StartContainer for \"4a3f48e97cd1ebf7ee55d5b2ec147d4a1eb077947444ee8bb99bee5368ef1b04\"" Nov 4 23:55:06.848775 containerd[1600]: time="2025-11-04T23:55:06.848731022Z" level=info msg="connecting to shim 3170d09b5445c54218a0bff0c5e5a1a68ccf94b73810dd7e0c50be110c1176f3" address="unix:///run/containerd/s/4bbc2ed4a31533c08ae2835c7f08c73cbc0c4f250601927253eeb08e605ef8e2" protocol=ttrpc version=3 Nov 4 23:55:06.852019 containerd[1600]: time="2025-11-04T23:55:06.850451700Z" level=info msg="connecting to shim 4a3f48e97cd1ebf7ee55d5b2ec147d4a1eb077947444ee8bb99bee5368ef1b04" address="unix:///run/containerd/s/1087bff6a8ba07459ef9aae95bdf0b2cd320b8bc954b3a49a0b9e8d7621c95a2" protocol=ttrpc version=3 Nov 4 23:55:06.910753 systemd[1]: Started cri-containerd-3170d09b5445c54218a0bff0c5e5a1a68ccf94b73810dd7e0c50be110c1176f3.scope - libcontainer container 3170d09b5445c54218a0bff0c5e5a1a68ccf94b73810dd7e0c50be110c1176f3. 
Nov 4 23:55:06.914354 systemd[1]: Started cri-containerd-4a3f48e97cd1ebf7ee55d5b2ec147d4a1eb077947444ee8bb99bee5368ef1b04.scope - libcontainer container 4a3f48e97cd1ebf7ee55d5b2ec147d4a1eb077947444ee8bb99bee5368ef1b04. Nov 4 23:55:06.996630 containerd[1600]: time="2025-11-04T23:55:06.996551144Z" level=info msg="StartContainer for \"3170d09b5445c54218a0bff0c5e5a1a68ccf94b73810dd7e0c50be110c1176f3\" returns successfully" Nov 4 23:55:07.000709 containerd[1600]: time="2025-11-04T23:55:07.000653956Z" level=info msg="StartContainer for \"4a3f48e97cd1ebf7ee55d5b2ec147d4a1eb077947444ee8bb99bee5368ef1b04\" returns successfully" Nov 4 23:55:07.160980 kubelet[2776]: E1104 23:55:07.158069 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:07.169350 kubelet[2776]: E1104 23:55:07.168926 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:07.202556 kubelet[2776]: I1104 23:55:07.199853 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h8fjn" podStartSLOduration=25.199729106 podStartE2EDuration="25.199729106s" podCreationTimestamp="2025-11-04 23:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:07.19916718 +0000 UTC m=+30.566035783" watchObservedRunningTime="2025-11-04 23:55:07.199729106 +0000 UTC m=+30.566597707" Nov 4 23:55:07.248204 kubelet[2776]: I1104 23:55:07.248106 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lkfd5" podStartSLOduration=25.247963912 podStartE2EDuration="25.247963912s" podCreationTimestamp="2025-11-04 23:54:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:55:07.245018796 +0000 UTC m=+30.611887406" watchObservedRunningTime="2025-11-04 23:55:07.247963912 +0000 UTC m=+30.614832512" Nov 4 23:55:07.529601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140714899.mount: Deactivated successfully. Nov 4 23:55:08.174546 kubelet[2776]: E1104 23:55:08.174225 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:08.176638 kubelet[2776]: E1104 23:55:08.175832 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:09.174316 kubelet[2776]: E1104 23:55:09.174248 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:09.174892 kubelet[2776]: E1104 23:55:09.174847 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:18.737194 systemd[1]: Started sshd@7-64.23.154.5:22-139.178.89.65:54502.service - OpenSSH per-connection server daemon (139.178.89.65:54502). Nov 4 23:55:18.892777 sshd[4121]: Accepted publickey for core from 139.178.89.65 port 54502 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:18.895889 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:18.903509 systemd-logind[1571]: New session 8 of user core. Nov 4 23:55:18.909712 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 4 23:55:19.568884 sshd[4124]: Connection closed by 139.178.89.65 port 54502 Nov 4 23:55:19.569899 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:19.589560 systemd[1]: sshd@7-64.23.154.5:22-139.178.89.65:54502.service: Deactivated successfully. Nov 4 23:55:19.593244 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:55:19.597020 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:55:19.599039 systemd-logind[1571]: Removed session 8. Nov 4 23:55:24.597622 systemd[1]: Started sshd@8-64.23.154.5:22-139.178.89.65:54510.service - OpenSSH per-connection server daemon (139.178.89.65:54510). Nov 4 23:55:24.693564 sshd[4138]: Accepted publickey for core from 139.178.89.65 port 54510 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:24.695979 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:24.702834 systemd-logind[1571]: New session 9 of user core. Nov 4 23:55:24.721722 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:55:24.898376 sshd[4141]: Connection closed by 139.178.89.65 port 54510 Nov 4 23:55:24.897299 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:24.904351 systemd[1]: sshd@8-64.23.154.5:22-139.178.89.65:54510.service: Deactivated successfully. Nov 4 23:55:24.907741 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:55:24.912773 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:55:24.914421 systemd-logind[1571]: Removed session 9. Nov 4 23:55:29.931875 systemd[1]: Started sshd@9-64.23.154.5:22-139.178.89.65:53104.service - OpenSSH per-connection server daemon (139.178.89.65:53104). 
Nov 4 23:55:30.013787 sshd[4154]: Accepted publickey for core from 139.178.89.65 port 53104 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:30.018052 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:30.036042 systemd-logind[1571]: New session 10 of user core. Nov 4 23:55:30.051544 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:55:30.292780 sshd[4157]: Connection closed by 139.178.89.65 port 53104 Nov 4 23:55:30.293959 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:30.303247 systemd[1]: sshd@9-64.23.154.5:22-139.178.89.65:53104.service: Deactivated successfully. Nov 4 23:55:30.308942 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:55:30.311552 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:55:30.314873 systemd-logind[1571]: Removed session 10. Nov 4 23:55:35.306022 systemd[1]: Started sshd@10-64.23.154.5:22-139.178.89.65:53108.service - OpenSSH per-connection server daemon (139.178.89.65:53108). Nov 4 23:55:35.393866 sshd[4169]: Accepted publickey for core from 139.178.89.65 port 53108 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:35.396306 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:35.405140 systemd-logind[1571]: New session 11 of user core. Nov 4 23:55:35.415174 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:55:35.568056 sshd[4172]: Connection closed by 139.178.89.65 port 53108 Nov 4 23:55:35.568982 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:35.582314 systemd[1]: sshd@10-64.23.154.5:22-139.178.89.65:53108.service: Deactivated successfully. Nov 4 23:55:35.587142 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:55:35.590100 systemd-logind[1571]: Session 11 logged out. 
Waiting for processes to exit. Nov 4 23:55:35.597552 systemd[1]: Started sshd@11-64.23.154.5:22-139.178.89.65:53124.service - OpenSSH per-connection server daemon (139.178.89.65:53124). Nov 4 23:55:35.600081 systemd-logind[1571]: Removed session 11. Nov 4 23:55:35.694767 sshd[4185]: Accepted publickey for core from 139.178.89.65 port 53124 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:35.697060 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:35.704666 systemd-logind[1571]: New session 12 of user core. Nov 4 23:55:35.717748 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:55:35.992091 sshd[4188]: Connection closed by 139.178.89.65 port 53124 Nov 4 23:55:35.994842 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:36.014408 systemd[1]: sshd@11-64.23.154.5:22-139.178.89.65:53124.service: Deactivated successfully. Nov 4 23:55:36.021267 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:55:36.023393 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:55:36.033070 systemd[1]: Started sshd@12-64.23.154.5:22-139.178.89.65:47906.service - OpenSSH per-connection server daemon (139.178.89.65:47906). Nov 4 23:55:36.035839 systemd-logind[1571]: Removed session 12. Nov 4 23:55:36.150897 sshd[4198]: Accepted publickey for core from 139.178.89.65 port 47906 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:36.152537 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:36.159494 systemd-logind[1571]: New session 13 of user core. Nov 4 23:55:36.178713 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 4 23:55:36.339831 sshd[4201]: Connection closed by 139.178.89.65 port 47906 Nov 4 23:55:36.340764 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:36.350305 systemd[1]: sshd@12-64.23.154.5:22-139.178.89.65:47906.service: Deactivated successfully. Nov 4 23:55:36.354215 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:55:36.357283 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:55:36.358696 systemd-logind[1571]: Removed session 13. Nov 4 23:55:41.369999 systemd[1]: Started sshd@13-64.23.154.5:22-139.178.89.65:47914.service - OpenSSH per-connection server daemon (139.178.89.65:47914). Nov 4 23:55:41.468397 sshd[4216]: Accepted publickey for core from 139.178.89.65 port 47914 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:41.470307 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:41.477041 systemd-logind[1571]: New session 14 of user core. Nov 4 23:55:41.485675 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:55:41.634713 sshd[4219]: Connection closed by 139.178.89.65 port 47914 Nov 4 23:55:41.635725 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:41.642346 systemd[1]: sshd@13-64.23.154.5:22-139.178.89.65:47914.service: Deactivated successfully. Nov 4 23:55:41.644859 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:55:41.647503 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:55:41.649061 systemd-logind[1571]: Removed session 14. Nov 4 23:55:46.653686 systemd[1]: Started sshd@14-64.23.154.5:22-139.178.89.65:50680.service - OpenSSH per-connection server daemon (139.178.89.65:50680). 
Nov 4 23:55:46.733538 sshd[4233]: Accepted publickey for core from 139.178.89.65 port 50680 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:46.735831 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:46.744470 systemd-logind[1571]: New session 15 of user core. Nov 4 23:55:46.748644 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:55:46.878797 kubelet[2776]: E1104 23:55:46.878074 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:46.914798 sshd[4236]: Connection closed by 139.178.89.65 port 50680 Nov 4 23:55:46.916042 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:46.928452 systemd[1]: sshd@14-64.23.154.5:22-139.178.89.65:50680.service: Deactivated successfully. Nov 4 23:55:46.930769 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:55:46.931999 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:55:46.935848 systemd[1]: Started sshd@15-64.23.154.5:22-139.178.89.65:50686.service - OpenSSH per-connection server daemon (139.178.89.65:50686). Nov 4 23:55:46.937541 systemd-logind[1571]: Removed session 15. Nov 4 23:55:47.008249 sshd[4248]: Accepted publickey for core from 139.178.89.65 port 50686 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:47.010572 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:47.022491 systemd-logind[1571]: New session 16 of user core. Nov 4 23:55:47.028610 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 4 23:55:47.348798 sshd[4251]: Connection closed by 139.178.89.65 port 50686 Nov 4 23:55:47.348992 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:47.362639 systemd[1]: sshd@15-64.23.154.5:22-139.178.89.65:50686.service: Deactivated successfully. Nov 4 23:55:47.365529 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:55:47.367258 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:55:47.372896 systemd[1]: Started sshd@16-64.23.154.5:22-139.178.89.65:50702.service - OpenSSH per-connection server daemon (139.178.89.65:50702). Nov 4 23:55:47.375529 systemd-logind[1571]: Removed session 16. Nov 4 23:55:47.487282 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 50702 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:47.489394 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:47.497294 systemd-logind[1571]: New session 17 of user core. Nov 4 23:55:47.507701 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:55:48.360849 sshd[4264]: Connection closed by 139.178.89.65 port 50702 Nov 4 23:55:48.361268 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:48.382978 systemd[1]: sshd@16-64.23.154.5:22-139.178.89.65:50702.service: Deactivated successfully. Nov 4 23:55:48.387814 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:55:48.392456 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:55:48.400966 systemd[1]: Started sshd@17-64.23.154.5:22-139.178.89.65:50704.service - OpenSSH per-connection server daemon (139.178.89.65:50704). Nov 4 23:55:48.409836 systemd-logind[1571]: Removed session 17. 
Nov 4 23:55:48.541467 sshd[4279]: Accepted publickey for core from 139.178.89.65 port 50704 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:48.543539 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:48.552290 systemd-logind[1571]: New session 18 of user core. Nov 4 23:55:48.562673 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:55:48.913659 sshd[4282]: Connection closed by 139.178.89.65 port 50704 Nov 4 23:55:48.914018 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:48.928470 systemd[1]: sshd@17-64.23.154.5:22-139.178.89.65:50704.service: Deactivated successfully. Nov 4 23:55:48.932328 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:55:48.934181 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:55:48.941780 systemd[1]: Started sshd@18-64.23.154.5:22-139.178.89.65:50710.service - OpenSSH per-connection server daemon (139.178.89.65:50710). Nov 4 23:55:48.946360 systemd-logind[1571]: Removed session 18. Nov 4 23:55:49.017865 sshd[4292]: Accepted publickey for core from 139.178.89.65 port 50710 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:49.019931 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:49.026690 systemd-logind[1571]: New session 19 of user core. Nov 4 23:55:49.033664 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:55:49.191987 sshd[4295]: Connection closed by 139.178.89.65 port 50710 Nov 4 23:55:49.192889 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:49.200435 systemd[1]: sshd@18-64.23.154.5:22-139.178.89.65:50710.service: Deactivated successfully. Nov 4 23:55:49.203881 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:55:49.207654 systemd-logind[1571]: Session 19 logged out. 
Waiting for processes to exit. Nov 4 23:55:49.208871 systemd-logind[1571]: Removed session 19. Nov 4 23:55:51.878142 kubelet[2776]: E1104 23:55:51.878082 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:54.217592 systemd[1]: Started sshd@19-64.23.154.5:22-139.178.89.65:50718.service - OpenSSH per-connection server daemon (139.178.89.65:50718). Nov 4 23:55:54.308977 sshd[4311]: Accepted publickey for core from 139.178.89.65 port 50718 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:54.311395 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:54.321122 systemd-logind[1571]: New session 20 of user core. Nov 4 23:55:54.325612 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 23:55:54.498520 sshd[4314]: Connection closed by 139.178.89.65 port 50718 Nov 4 23:55:54.499391 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:54.504490 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:55:54.504822 systemd[1]: sshd@19-64.23.154.5:22-139.178.89.65:50718.service: Deactivated successfully. Nov 4 23:55:54.507695 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:55:54.511451 systemd-logind[1571]: Removed session 20. Nov 4 23:55:57.877957 kubelet[2776]: E1104 23:55:57.877825 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:55:59.517802 systemd[1]: Started sshd@20-64.23.154.5:22-139.178.89.65:39248.service - OpenSSH per-connection server daemon (139.178.89.65:39248). 
Nov 4 23:55:59.604381 sshd[4326]: Accepted publickey for core from 139.178.89.65 port 39248 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:55:59.608101 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:55:59.616421 systemd-logind[1571]: New session 21 of user core. Nov 4 23:55:59.625656 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 23:55:59.797494 sshd[4330]: Connection closed by 139.178.89.65 port 39248 Nov 4 23:55:59.798346 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Nov 4 23:55:59.803549 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:55:59.804852 systemd[1]: sshd@20-64.23.154.5:22-139.178.89.65:39248.service: Deactivated successfully. Nov 4 23:55:59.809107 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:55:59.812709 systemd-logind[1571]: Removed session 21. Nov 4 23:56:03.878005 kubelet[2776]: E1104 23:56:03.877939 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 4 23:56:04.814637 systemd[1]: Started sshd@21-64.23.154.5:22-139.178.89.65:39250.service - OpenSSH per-connection server daemon (139.178.89.65:39250). Nov 4 23:56:04.900504 sshd[4342]: Accepted publickey for core from 139.178.89.65 port 39250 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:04.902591 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:04.909263 systemd-logind[1571]: New session 22 of user core. Nov 4 23:56:04.917704 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 4 23:56:05.081252 sshd[4345]: Connection closed by 139.178.89.65 port 39250 Nov 4 23:56:05.083202 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:05.094003 systemd[1]: sshd@21-64.23.154.5:22-139.178.89.65:39250.service: Deactivated successfully. Nov 4 23:56:05.098033 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:56:05.099753 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:56:05.105565 systemd[1]: Started sshd@22-64.23.154.5:22-139.178.89.65:39260.service - OpenSSH per-connection server daemon (139.178.89.65:39260). Nov 4 23:56:05.107876 systemd-logind[1571]: Removed session 22. Nov 4 23:56:05.191492 sshd[4357]: Accepted publickey for core from 139.178.89.65 port 39260 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:05.193892 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:05.203466 systemd-logind[1571]: New session 23 of user core. Nov 4 23:56:05.214777 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 4 23:56:07.001190 containerd[1600]: time="2025-11-04T23:56:07.001108987Z" level=info msg="StopContainer for \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" with timeout 30 (s)" Nov 4 23:56:07.005816 containerd[1600]: time="2025-11-04T23:56:07.005696409Z" level=info msg="Stop container \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" with signal terminated" Nov 4 23:56:07.050448 containerd[1600]: time="2025-11-04T23:56:07.050279035Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:56:07.059142 containerd[1600]: time="2025-11-04T23:56:07.058962659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" id:\"11ac8038439f226a07119efcaa442d5eeabb0858874451005a0369eb98d02a38\" pid:4379 exited_at:{seconds:1762300567 nanos:57085898}" Nov 4 23:56:07.075985 containerd[1600]: time="2025-11-04T23:56:07.075898111Z" level=info msg="StopContainer for \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" with timeout 2 (s)" Nov 4 23:56:07.076641 containerd[1600]: time="2025-11-04T23:56:07.076467962Z" level=info msg="Stop container \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" with signal terminated" Nov 4 23:56:07.082524 systemd[1]: cri-containerd-7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041.scope: Deactivated successfully. 
Nov 4 23:56:07.089818 containerd[1600]: time="2025-11-04T23:56:07.089764432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" id:\"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" pid:3365 exited_at:{seconds:1762300567 nanos:88453644}" Nov 4 23:56:07.090590 containerd[1600]: time="2025-11-04T23:56:07.090543907Z" level=info msg="received exit event container_id:\"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" id:\"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" pid:3365 exited_at:{seconds:1762300567 nanos:88453644}" Nov 4 23:56:07.102820 systemd-networkd[1497]: lxc_health: Link DOWN Nov 4 23:56:07.102831 systemd-networkd[1497]: lxc_health: Lost carrier Nov 4 23:56:07.134502 systemd[1]: cri-containerd-db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5.scope: Deactivated successfully. Nov 4 23:56:07.135511 systemd[1]: cri-containerd-db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5.scope: Consumed 10.405s CPU time, 196.9M memory peak, 74.4M read from disk, 13.3M written to disk. 
Nov 4 23:56:07.137002 containerd[1600]: time="2025-11-04T23:56:07.136824379Z" level=info msg="received exit event container_id:\"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" id:\"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" pid:3434 exited_at:{seconds:1762300567 nanos:136542834}" Nov 4 23:56:07.138182 containerd[1600]: time="2025-11-04T23:56:07.137222764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" id:\"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" pid:3434 exited_at:{seconds:1762300567 nanos:136542834}" Nov 4 23:56:07.149899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041-rootfs.mount: Deactivated successfully. Nov 4 23:56:07.168045 containerd[1600]: time="2025-11-04T23:56:07.167845568Z" level=info msg="StopContainer for \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" returns successfully" Nov 4 23:56:07.170026 containerd[1600]: time="2025-11-04T23:56:07.169983360Z" level=info msg="StopPodSandbox for \"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\"" Nov 4 23:56:07.170288 containerd[1600]: time="2025-11-04T23:56:07.170071357Z" level=info msg="Container to stop \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:07.184209 systemd[1]: cri-containerd-d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74.scope: Deactivated successfully. 
Nov 4 23:56:07.196055 containerd[1600]: time="2025-11-04T23:56:07.195931738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\" id:\"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\" pid:2999 exit_status:137 exited_at:{seconds:1762300567 nanos:192696033}" Nov 4 23:56:07.197583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5-rootfs.mount: Deactivated successfully. Nov 4 23:56:07.212561 containerd[1600]: time="2025-11-04T23:56:07.212094848Z" level=info msg="StopContainer for \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" returns successfully" Nov 4 23:56:07.212825 containerd[1600]: time="2025-11-04T23:56:07.212711552Z" level=info msg="StopPodSandbox for \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\"" Nov 4 23:56:07.212825 containerd[1600]: time="2025-11-04T23:56:07.212804134Z" level=info msg="Container to stop \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:07.212825 containerd[1600]: time="2025-11-04T23:56:07.212818853Z" level=info msg="Container to stop \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:07.212825 containerd[1600]: time="2025-11-04T23:56:07.212829370Z" level=info msg="Container to stop \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:07.212987 containerd[1600]: time="2025-11-04T23:56:07.212838982Z" level=info msg="Container to stop \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:07.212987 containerd[1600]: 
time="2025-11-04T23:56:07.212848646Z" level=info msg="Container to stop \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 23:56:07.225181 systemd[1]: cri-containerd-ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05.scope: Deactivated successfully. Nov 4 23:56:07.253225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74-rootfs.mount: Deactivated successfully. Nov 4 23:56:07.258505 containerd[1600]: time="2025-11-04T23:56:07.258440057Z" level=info msg="shim disconnected" id=d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74 namespace=k8s.io Nov 4 23:56:07.258731 containerd[1600]: time="2025-11-04T23:56:07.258481727Z" level=warning msg="cleaning up after shim disconnected" id=d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74 namespace=k8s.io Nov 4 23:56:07.272618 containerd[1600]: time="2025-11-04T23:56:07.258592771Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 23:56:07.279698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05-rootfs.mount: Deactivated successfully. 
Nov 4 23:56:07.284276 containerd[1600]: time="2025-11-04T23:56:07.283989008Z" level=info msg="shim disconnected" id=ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05 namespace=k8s.io Nov 4 23:56:07.284276 containerd[1600]: time="2025-11-04T23:56:07.284036549Z" level=warning msg="cleaning up after shim disconnected" id=ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05 namespace=k8s.io Nov 4 23:56:07.284276 containerd[1600]: time="2025-11-04T23:56:07.284048100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 23:56:07.320172 containerd[1600]: time="2025-11-04T23:56:07.320114937Z" level=info msg="received exit event sandbox_id:\"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\" exit_status:137 exited_at:{seconds:1762300567 nanos:192696033}" Nov 4 23:56:07.320739 containerd[1600]: time="2025-11-04T23:56:07.320708724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" id:\"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" pid:2929 exit_status:137 exited_at:{seconds:1762300567 nanos:229255281}" Nov 4 23:56:07.321191 containerd[1600]: time="2025-11-04T23:56:07.320847112Z" level=info msg="received exit event sandbox_id:\"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" exit_status:137 exited_at:{seconds:1762300567 nanos:229255281}" Nov 4 23:56:07.324720 containerd[1600]: time="2025-11-04T23:56:07.324680559Z" level=info msg="TearDown network for sandbox \"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\" successfully" Nov 4 23:56:07.326028 containerd[1600]: time="2025-11-04T23:56:07.324854519Z" level=info msg="StopPodSandbox for \"d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74\" returns successfully" Nov 4 23:56:07.326028 containerd[1600]: time="2025-11-04T23:56:07.325693650Z" level=info msg="TearDown network for sandbox 
\"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" successfully" Nov 4 23:56:07.326028 containerd[1600]: time="2025-11-04T23:56:07.325718087Z" level=info msg="StopPodSandbox for \"ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05\" returns successfully" Nov 4 23:56:07.324909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2eee780c41c69855c05ce3150dd6023a763769bca94c6158b04a00c3bd91e74-shm.mount: Deactivated successfully. Nov 4 23:56:07.325048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea9e5e3e96feee9fd4cf6f8b1944d55ed8a7091ebf57739eed30ce9d8a12be05-shm.mount: Deactivated successfully. Nov 4 23:56:07.376809 kubelet[2776]: I1104 23:56:07.376685 2776 scope.go:117] "RemoveContainer" containerID="7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041" Nov 4 23:56:07.384059 containerd[1600]: time="2025-11-04T23:56:07.384007181Z" level=info msg="RemoveContainer for \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\"" Nov 4 23:56:07.394849 containerd[1600]: time="2025-11-04T23:56:07.394778755Z" level=info msg="RemoveContainer for \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" returns successfully" Nov 4 23:56:07.399255 kubelet[2776]: I1104 23:56:07.399197 2776 scope.go:117] "RemoveContainer" containerID="7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041" Nov 4 23:56:07.400441 containerd[1600]: time="2025-11-04T23:56:07.399913360Z" level=error msg="ContainerStatus for \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\": not found" Nov 4 23:56:07.400890 kubelet[2776]: E1104 23:56:07.400844 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\": not found" containerID="7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041" Nov 4 23:56:07.401051 kubelet[2776]: I1104 23:56:07.400975 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041"} err="failed to get container status \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f547b22496c5b77033af3bc9e82d2ea5eb0f551263d31dfc5dd902510064041\": not found" Nov 4 23:56:07.401621 kubelet[2776]: I1104 23:56:07.401050 2776 scope.go:117] "RemoveContainer" containerID="db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5" Nov 4 23:56:07.406562 containerd[1600]: time="2025-11-04T23:56:07.406490433Z" level=info msg="RemoveContainer for \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\"" Nov 4 23:56:07.418036 containerd[1600]: time="2025-11-04T23:56:07.416449785Z" level=info msg="RemoveContainer for \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" returns successfully" Nov 4 23:56:07.420187 kubelet[2776]: I1104 23:56:07.418694 2776 scope.go:117] "RemoveContainer" containerID="af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719" Nov 4 23:56:07.427556 containerd[1600]: time="2025-11-04T23:56:07.427485200Z" level=info msg="RemoveContainer for \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\"" Nov 4 23:56:07.434492 containerd[1600]: time="2025-11-04T23:56:07.434375629Z" level=info msg="RemoveContainer for \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" returns successfully" Nov 4 23:56:07.435254 kubelet[2776]: I1104 23:56:07.435118 2776 scope.go:117] "RemoveContainer" containerID="aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6" Nov 4 23:56:07.438551 
containerd[1600]: time="2025-11-04T23:56:07.438502482Z" level=info msg="RemoveContainer for \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\"" Nov 4 23:56:07.444237 containerd[1600]: time="2025-11-04T23:56:07.444166523Z" level=info msg="RemoveContainer for \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" returns successfully" Nov 4 23:56:07.445099 kubelet[2776]: I1104 23:56:07.444956 2776 scope.go:117] "RemoveContainer" containerID="1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54" Nov 4 23:56:07.448017 containerd[1600]: time="2025-11-04T23:56:07.447391426Z" level=info msg="RemoveContainer for \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\"" Nov 4 23:56:07.451845 containerd[1600]: time="2025-11-04T23:56:07.451785273Z" level=info msg="RemoveContainer for \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" returns successfully" Nov 4 23:56:07.452538 kubelet[2776]: I1104 23:56:07.452489 2776 scope.go:117] "RemoveContainer" containerID="36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826" Nov 4 23:56:07.455403 containerd[1600]: time="2025-11-04T23:56:07.455307058Z" level=info msg="RemoveContainer for \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\"" Nov 4 23:56:07.459601 containerd[1600]: time="2025-11-04T23:56:07.459526193Z" level=info msg="RemoveContainer for \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" returns successfully" Nov 4 23:56:07.459951 kubelet[2776]: I1104 23:56:07.459914 2776 scope.go:117] "RemoveContainer" containerID="db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5" Nov 4 23:56:07.460655 containerd[1600]: time="2025-11-04T23:56:07.460583707Z" level=error msg="ContainerStatus for \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\": not found" Nov 4 23:56:07.461018 kubelet[2776]: E1104 23:56:07.460930 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\": not found" containerID="db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5" Nov 4 23:56:07.461018 kubelet[2776]: I1104 23:56:07.460986 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5"} err="failed to get container status \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"db6619c78594b66e07d7f08c628fe4f56167b92463e4f324defc44ab1106d6c5\": not found" Nov 4 23:56:07.461018 kubelet[2776]: I1104 23:56:07.461024 2776 scope.go:117] "RemoveContainer" containerID="af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719" Nov 4 23:56:07.461345 containerd[1600]: time="2025-11-04T23:56:07.461273260Z" level=error msg="ContainerStatus for \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\": not found" Nov 4 23:56:07.461784 kubelet[2776]: E1104 23:56:07.461740 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\": not found" containerID="af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719" Nov 4 23:56:07.461883 kubelet[2776]: I1104 23:56:07.461781 2776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719"} err="failed to get container status \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\": rpc error: code = NotFound desc = an error occurred when try to find container \"af14da7812aba29730b24156649f8f1bb7b342a0d3798a6332befcb8c2585719\": not found" Nov 4 23:56:07.461883 kubelet[2776]: I1104 23:56:07.461829 2776 scope.go:117] "RemoveContainer" containerID="aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6" Nov 4 23:56:07.462384 containerd[1600]: time="2025-11-04T23:56:07.462266632Z" level=error msg="ContainerStatus for \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\": not found" Nov 4 23:56:07.462604 kubelet[2776]: E1104 23:56:07.462560 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\": not found" containerID="aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6" Nov 4 23:56:07.462734 kubelet[2776]: I1104 23:56:07.462702 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6"} err="failed to get container status \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"aeb562b5658034b84d51b7abdbc3574e7dae9a67dbf3e35712202928945d89b6\": not found" Nov 4 23:56:07.462784 kubelet[2776]: I1104 23:56:07.462775 2776 scope.go:117] "RemoveContainer" containerID="1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54" Nov 4 23:56:07.463079 containerd[1600]: 
time="2025-11-04T23:56:07.463027489Z" level=error msg="ContainerStatus for \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\": not found" Nov 4 23:56:07.463247 kubelet[2776]: E1104 23:56:07.463212 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\": not found" containerID="1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54" Nov 4 23:56:07.463305 kubelet[2776]: I1104 23:56:07.463251 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54"} err="failed to get container status \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ddb88d0d3a216706dc327db775945301ba22760a57000fb67aad2cd060d5c54\": not found" Nov 4 23:56:07.463305 kubelet[2776]: I1104 23:56:07.463275 2776 scope.go:117] "RemoveContainer" containerID="36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826" Nov 4 23:56:07.463743 containerd[1600]: time="2025-11-04T23:56:07.463550428Z" level=error msg="ContainerStatus for \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\": not found" Nov 4 23:56:07.464056 kubelet[2776]: E1104 23:56:07.463991 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\": not 
found" containerID="36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826" Nov 4 23:56:07.464056 kubelet[2776]: I1104 23:56:07.464026 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826"} err="failed to get container status \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\": rpc error: code = NotFound desc = an error occurred when try to find container \"36ca1b1854f3e70e724c0f5495fdc75e3b28ad386cf6d5fa8e74baa0fcba7826\": not found" Nov 4 23:56:07.523689 kubelet[2776]: I1104 23:56:07.522476 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-cgroup\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.523689 kubelet[2776]: I1104 23:56:07.522599 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.523689 kubelet[2776]: I1104 23:56:07.522653 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-net\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.523689 kubelet[2776]: I1104 23:56:07.522684 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bc7019d-9c96-4edf-a83b-2bef1113a48e-clustermesh-secrets\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.523689 kubelet[2776]: I1104 23:56:07.522710 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cni-path\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.523689 kubelet[2776]: I1104 23:56:07.522728 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9456\" (UniqueName: \"kubernetes.io/projected/fde141ce-658c-4947-84ce-8de61f26a185-kube-api-access-h9456\") pod \"fde141ce-658c-4947-84ce-8de61f26a185\" (UID: \"fde141ce-658c-4947-84ce-8de61f26a185\") " Nov 4 23:56:07.524059 kubelet[2776]: I1104 23:56:07.522761 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-etc-cni-netd\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524059 kubelet[2776]: I1104 23:56:07.522776 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-run\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524059 kubelet[2776]: I1104 23:56:07.522795 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-bpf-maps\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524059 kubelet[2776]: I1104 23:56:07.522813 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwk6p\" (UniqueName: \"kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-kube-api-access-xwk6p\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524059 kubelet[2776]: I1104 23:56:07.522831 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-xtables-lock\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524059 kubelet[2776]: I1104 23:56:07.522847 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hostproc\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524337 kubelet[2776]: I1104 23:56:07.522868 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-kernel\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524337 kubelet[2776]: I1104 23:56:07.522885 2776 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-lib-modules\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524337 kubelet[2776]: I1104 23:56:07.522987 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hubble-tls\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524337 kubelet[2776]: I1104 23:56:07.523020 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-config-path\") pod \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\" (UID: \"6bc7019d-9c96-4edf-a83b-2bef1113a48e\") " Nov 4 23:56:07.524337 kubelet[2776]: I1104 23:56:07.523048 2776 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fde141ce-658c-4947-84ce-8de61f26a185-cilium-config-path\") pod \"fde141ce-658c-4947-84ce-8de61f26a185\" (UID: \"fde141ce-658c-4947-84ce-8de61f26a185\") " Nov 4 23:56:07.524337 kubelet[2776]: I1104 23:56:07.523110 2776 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-cgroup\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.525650 kubelet[2776]: I1104 23:56:07.523379 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.525650 kubelet[2776]: I1104 23:56:07.523407 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.529419 kubelet[2776]: I1104 23:56:07.526019 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cni-path" (OuterVolumeSpecName: "cni-path") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.530384 kubelet[2776]: I1104 23:56:07.529896 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.530384 kubelet[2776]: I1104 23:56:07.529930 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.530384 kubelet[2776]: I1104 23:56:07.529896 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.530384 kubelet[2776]: I1104 23:56:07.529988 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hostproc" (OuterVolumeSpecName: "hostproc") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.530384 kubelet[2776]: I1104 23:56:07.530012 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.530717 kubelet[2776]: I1104 23:56:07.530036 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 23:56:07.540101 kubelet[2776]: I1104 23:56:07.539803 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:56:07.542033 kubelet[2776]: I1104 23:56:07.541938 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde141ce-658c-4947-84ce-8de61f26a185-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fde141ce-658c-4947-84ce-8de61f26a185" (UID: "fde141ce-658c-4947-84ce-8de61f26a185"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:56:07.546644 kubelet[2776]: I1104 23:56:07.546538 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc7019d-9c96-4edf-a83b-2bef1113a48e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:56:07.547220 kubelet[2776]: I1104 23:56:07.547136 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:07.548248 kubelet[2776]: I1104 23:56:07.548184 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde141ce-658c-4947-84ce-8de61f26a185-kube-api-access-h9456" (OuterVolumeSpecName: "kube-api-access-h9456") pod "fde141ce-658c-4947-84ce-8de61f26a185" (UID: "fde141ce-658c-4947-84ce-8de61f26a185"). InnerVolumeSpecName "kube-api-access-h9456". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:07.549248 kubelet[2776]: I1104 23:56:07.549214 2776 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-kube-api-access-xwk6p" (OuterVolumeSpecName: "kube-api-access-xwk6p") pod "6bc7019d-9c96-4edf-a83b-2bef1113a48e" (UID: "6bc7019d-9c96-4edf-a83b-2bef1113a48e"). InnerVolumeSpecName "kube-api-access-xwk6p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623417 2776 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bc7019d-9c96-4edf-a83b-2bef1113a48e-clustermesh-secrets\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623500 2776 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cni-path\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623515 2776 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h9456\" (UniqueName: \"kubernetes.io/projected/fde141ce-658c-4947-84ce-8de61f26a185-kube-api-access-h9456\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623526 2776 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-etc-cni-netd\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623535 2776 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-run\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623543 2776 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-bpf-maps\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623551 2776 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwk6p\" (UniqueName: \"kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-kube-api-access-xwk6p\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.623656 kubelet[2776]: I1104 23:56:07.623562 2776 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-xtables-lock\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.624065 kubelet[2776]: I1104 23:56:07.623573 2776 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hostproc\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.624065 kubelet[2776]: I1104 23:56:07.623582 2776 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-kernel\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.624065 kubelet[2776]: I1104 23:56:07.623590 2776 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-lib-modules\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.624065 kubelet[2776]: I1104 23:56:07.623599 2776 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bc7019d-9c96-4edf-a83b-2bef1113a48e-hubble-tls\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.624065 kubelet[2776]: I1104 23:56:07.623606 2776 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bc7019d-9c96-4edf-a83b-2bef1113a48e-cilium-config-path\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.624065 kubelet[2776]: I1104 23:56:07.623614 2776 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fde141ce-658c-4947-84ce-8de61f26a185-cilium-config-path\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.624065 kubelet[2776]: I1104 23:56:07.623625 2776 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bc7019d-9c96-4edf-a83b-2bef1113a48e-host-proc-sys-net\") on node \"ci-4487.0.0-n-50b5667972\" DevicePath \"\"" Nov 4 23:56:07.682213 systemd[1]: Removed slice kubepods-besteffort-podfde141ce_658c_4947_84ce_8de61f26a185.slice - libcontainer container kubepods-besteffort-podfde141ce_658c_4947_84ce_8de61f26a185.slice. Nov 4 23:56:07.710447 systemd[1]: Removed slice kubepods-burstable-pod6bc7019d_9c96_4edf_a83b_2bef1113a48e.slice - libcontainer container kubepods-burstable-pod6bc7019d_9c96_4edf_a83b_2bef1113a48e.slice. Nov 4 23:56:07.710620 systemd[1]: kubepods-burstable-pod6bc7019d_9c96_4edf_a83b_2bef1113a48e.slice: Consumed 10.562s CPU time, 197.3M memory peak, 75.6M read from disk, 13.3M written to disk. 
Nov 4 23:56:08.150341 systemd[1]: var-lib-kubelet-pods-fde141ce\x2d658c\x2d4947\x2d84ce\x2d8de61f26a185-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh9456.mount: Deactivated successfully. Nov 4 23:56:08.150575 systemd[1]: var-lib-kubelet-pods-6bc7019d\x2d9c96\x2d4edf\x2da83b\x2d2bef1113a48e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwk6p.mount: Deactivated successfully. Nov 4 23:56:08.150675 systemd[1]: var-lib-kubelet-pods-6bc7019d\x2d9c96\x2d4edf\x2da83b\x2d2bef1113a48e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 4 23:56:08.150780 systemd[1]: var-lib-kubelet-pods-6bc7019d\x2d9c96\x2d4edf\x2da83b\x2d2bef1113a48e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 23:56:08.837884 sshd[4360]: Connection closed by 139.178.89.65 port 39260 Nov 4 23:56:08.840606 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:08.849996 systemd[1]: sshd@22-64.23.154.5:22-139.178.89.65:39260.service: Deactivated successfully. Nov 4 23:56:08.852875 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 23:56:08.854139 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit. Nov 4 23:56:08.856594 systemd-logind[1571]: Removed session 23. Nov 4 23:56:08.858510 systemd[1]: Started sshd@23-64.23.154.5:22-139.178.89.65:35102.service - OpenSSH per-connection server daemon (139.178.89.65:35102). 
Nov 4 23:56:08.880976 kubelet[2776]: I1104 23:56:08.880890 2776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc7019d-9c96-4edf-a83b-2bef1113a48e" path="/var/lib/kubelet/pods/6bc7019d-9c96-4edf-a83b-2bef1113a48e/volumes" Nov 4 23:56:08.881892 kubelet[2776]: I1104 23:56:08.881813 2776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde141ce-658c-4947-84ce-8de61f26a185" path="/var/lib/kubelet/pods/fde141ce-658c-4947-84ce-8de61f26a185/volumes" Nov 4 23:56:08.951064 sshd[4515]: Accepted publickey for core from 139.178.89.65 port 35102 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:08.952717 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:08.960806 systemd-logind[1571]: New session 24 of user core. Nov 4 23:56:08.971817 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 23:56:09.917362 sshd[4518]: Connection closed by 139.178.89.65 port 35102 Nov 4 23:56:09.917028 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:09.932286 systemd[1]: sshd@23-64.23.154.5:22-139.178.89.65:35102.service: Deactivated successfully. Nov 4 23:56:09.938154 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 23:56:09.940025 systemd-logind[1571]: Session 24 logged out. Waiting for processes to exit. Nov 4 23:56:09.949810 systemd[1]: Started sshd@24-64.23.154.5:22-139.178.89.65:35106.service - OpenSSH per-connection server daemon (139.178.89.65:35106). Nov 4 23:56:09.954592 systemd-logind[1571]: Removed session 24. Nov 4 23:56:10.044664 sshd[4529]: Accepted publickey for core from 139.178.89.65 port 35106 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk Nov 4 23:56:10.048572 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:10.059359 systemd-logind[1571]: New session 25 of user core. 
Nov 4 23:56:10.065052 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 4 23:56:10.126832 systemd[1]: Created slice kubepods-burstable-podab9d0c36_9ae3_4641_958c_34439f992036.slice - libcontainer container kubepods-burstable-podab9d0c36_9ae3_4641_958c_34439f992036.slice.
Nov 4 23:56:10.135354 sshd[4532]: Connection closed by 139.178.89.65 port 35106
Nov 4 23:56:10.136373 sshd-session[4529]: pam_unix(sshd:session): session closed for user core
Nov 4 23:56:10.154472 systemd[1]: sshd@24-64.23.154.5:22-139.178.89.65:35106.service: Deactivated successfully.
Nov 4 23:56:10.161646 systemd[1]: session-25.scope: Deactivated successfully.
Nov 4 23:56:10.168921 systemd-logind[1571]: Session 25 logged out. Waiting for processes to exit.
Nov 4 23:56:10.174848 systemd[1]: Started sshd@25-64.23.154.5:22-139.178.89.65:35120.service - OpenSSH per-connection server daemon (139.178.89.65:35120).
Nov 4 23:56:10.182526 systemd-logind[1571]: Removed session 25.
Nov 4 23:56:10.246352 kubelet[2776]: I1104 23:56:10.244592 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab9d0c36-9ae3-4641-958c-34439f992036-clustermesh-secrets\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.246352 kubelet[2776]: I1104 23:56:10.244648 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab9d0c36-9ae3-4641-958c-34439f992036-cilium-ipsec-secrets\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.246352 kubelet[2776]: I1104 23:56:10.244682 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj9nh\" (UniqueName: \"kubernetes.io/projected/ab9d0c36-9ae3-4641-958c-34439f992036-kube-api-access-gj9nh\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.246352 kubelet[2776]: I1104 23:56:10.244711 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-hostproc\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.246352 kubelet[2776]: I1104 23:56:10.244734 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-cilium-run\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.246352 kubelet[2776]: I1104 23:56:10.244761 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-bpf-maps\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247130 kubelet[2776]: I1104 23:56:10.244783 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-xtables-lock\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247130 kubelet[2776]: I1104 23:56:10.244806 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab9d0c36-9ae3-4641-958c-34439f992036-hubble-tls\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247130 kubelet[2776]: I1104 23:56:10.244838 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-cilium-cgroup\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247130 kubelet[2776]: I1104 23:56:10.244861 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-lib-modules\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247130 kubelet[2776]: I1104 23:56:10.244882 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-host-proc-sys-kernel\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247130 kubelet[2776]: I1104 23:56:10.244908 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-host-proc-sys-net\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247410 kubelet[2776]: I1104 23:56:10.244930 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-cni-path\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247410 kubelet[2776]: I1104 23:56:10.244958 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab9d0c36-9ae3-4641-958c-34439f992036-etc-cni-netd\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.247410 kubelet[2776]: I1104 23:56:10.244981 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab9d0c36-9ae3-4641-958c-34439f992036-cilium-config-path\") pod \"cilium-xhg2c\" (UID: \"ab9d0c36-9ae3-4641-958c-34439f992036\") " pod="kube-system/cilium-xhg2c"
Nov 4 23:56:10.312260 sshd[4539]: Accepted publickey for core from 139.178.89.65 port 35120 ssh2: RSA SHA256:Rq5CXoWTIcdYifnntDTUaY9VjA9cJ84ZY23eH9iA0qk
Nov 4 23:56:10.315876 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:56:10.327684 systemd-logind[1571]: New session 26 of user core.
Nov 4 23:56:10.335702 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 4 23:56:10.439091 kubelet[2776]: E1104 23:56:10.439036 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:10.442353 containerd[1600]: time="2025-11-04T23:56:10.441738359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xhg2c,Uid:ab9d0c36-9ae3-4641-958c-34439f992036,Namespace:kube-system,Attempt:0,}"
Nov 4 23:56:10.488360 containerd[1600]: time="2025-11-04T23:56:10.488270637Z" level=info msg="connecting to shim 04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0" address="unix:///run/containerd/s/94b850e430af2bb91cc93cba28a6d504143d56faab8abd9750e1f7db5563cfc7" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:56:10.540586 systemd[1]: Started cri-containerd-04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0.scope - libcontainer container 04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0.
Nov 4 23:56:10.606545 containerd[1600]: time="2025-11-04T23:56:10.606491119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xhg2c,Uid:ab9d0c36-9ae3-4641-958c-34439f992036,Namespace:kube-system,Attempt:0,} returns sandbox id \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\""
Nov 4 23:56:10.608483 kubelet[2776]: E1104 23:56:10.608441 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:10.617936 containerd[1600]: time="2025-11-04T23:56:10.617851121Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 4 23:56:10.634980 containerd[1600]: time="2025-11-04T23:56:10.634857058Z" level=info msg="Container 21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:10.644376 containerd[1600]: time="2025-11-04T23:56:10.643812795Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d\""
Nov 4 23:56:10.644769 containerd[1600]: time="2025-11-04T23:56:10.644734683Z" level=info msg="StartContainer for \"21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d\""
Nov 4 23:56:10.647381 containerd[1600]: time="2025-11-04T23:56:10.647290864Z" level=info msg="connecting to shim 21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d" address="unix:///run/containerd/s/94b850e430af2bb91cc93cba28a6d504143d56faab8abd9750e1f7db5563cfc7" protocol=ttrpc version=3
Nov 4 23:56:10.673661 systemd[1]: Started cri-containerd-21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d.scope - libcontainer container 21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d.
Nov 4 23:56:10.716149 containerd[1600]: time="2025-11-04T23:56:10.716015544Z" level=info msg="StartContainer for \"21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d\" returns successfully"
Nov 4 23:56:10.731436 systemd[1]: cri-containerd-21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d.scope: Deactivated successfully.
Nov 4 23:56:10.731769 systemd[1]: cri-containerd-21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d.scope: Consumed 28ms CPU time, 9.9M memory peak, 3.2M read from disk.
Nov 4 23:56:10.735447 containerd[1600]: time="2025-11-04T23:56:10.735375532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d\" id:\"21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d\" pid:4611 exited_at:{seconds:1762300570 nanos:734128348}"
Nov 4 23:56:10.735447 containerd[1600]: time="2025-11-04T23:56:10.735312728Z" level=info msg="received exit event container_id:\"21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d\" id:\"21b2955102793710f9c51e925e2382c3215980912c6432dc08dfd13eb4f4795d\" pid:4611 exited_at:{seconds:1762300570 nanos:734128348}"
Nov 4 23:56:11.420165 kubelet[2776]: E1104 23:56:11.420118 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:11.428172 containerd[1600]: time="2025-11-04T23:56:11.427471354Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 4 23:56:11.444464 containerd[1600]: time="2025-11-04T23:56:11.444294694Z" level=info msg="Container 85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:11.455984 containerd[1600]: time="2025-11-04T23:56:11.455614424Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5\""
Nov 4 23:56:11.458699 containerd[1600]: time="2025-11-04T23:56:11.458187421Z" level=info msg="StartContainer for \"85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5\""
Nov 4 23:56:11.462248 containerd[1600]: time="2025-11-04T23:56:11.462138841Z" level=info msg="connecting to shim 85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5" address="unix:///run/containerd/s/94b850e430af2bb91cc93cba28a6d504143d56faab8abd9750e1f7db5563cfc7" protocol=ttrpc version=3
Nov 4 23:56:11.499711 systemd[1]: Started cri-containerd-85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5.scope - libcontainer container 85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5.
Nov 4 23:56:11.545200 containerd[1600]: time="2025-11-04T23:56:11.545085701Z" level=info msg="StartContainer for \"85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5\" returns successfully"
Nov 4 23:56:11.560785 systemd[1]: cri-containerd-85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5.scope: Deactivated successfully.
Nov 4 23:56:11.562397 containerd[1600]: time="2025-11-04T23:56:11.561272208Z" level=info msg="received exit event container_id:\"85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5\" id:\"85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5\" pid:4658 exited_at:{seconds:1762300571 nanos:560858851}"
Nov 4 23:56:11.562397 containerd[1600]: time="2025-11-04T23:56:11.561684700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5\" id:\"85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5\" pid:4658 exited_at:{seconds:1762300571 nanos:560858851}"
Nov 4 23:56:11.561824 systemd[1]: cri-containerd-85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5.scope: Consumed 26ms CPU time, 7.5M memory peak, 2.2M read from disk.
Nov 4 23:56:11.592293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85b50d991ce4a20e800272bbee3fde9f4a481141893a54e9a05090d6cbd66fb5-rootfs.mount: Deactivated successfully.
Nov 4 23:56:12.054663 kubelet[2776]: E1104 23:56:12.054591 2776 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 4 23:56:12.430377 kubelet[2776]: E1104 23:56:12.429820 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:12.441835 containerd[1600]: time="2025-11-04T23:56:12.441734765Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 4 23:56:12.463526 containerd[1600]: time="2025-11-04T23:56:12.463426503Z" level=info msg="Container 2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:12.465156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263032894.mount: Deactivated successfully.
Nov 4 23:56:12.489294 containerd[1600]: time="2025-11-04T23:56:12.489230719Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55\""
Nov 4 23:56:12.493883 containerd[1600]: time="2025-11-04T23:56:12.493818362Z" level=info msg="StartContainer for \"2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55\""
Nov 4 23:56:12.496614 containerd[1600]: time="2025-11-04T23:56:12.496548475Z" level=info msg="connecting to shim 2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55" address="unix:///run/containerd/s/94b850e430af2bb91cc93cba28a6d504143d56faab8abd9750e1f7db5563cfc7" protocol=ttrpc version=3
Nov 4 23:56:12.529725 systemd[1]: Started cri-containerd-2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55.scope - libcontainer container 2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55.
Nov 4 23:56:12.599692 containerd[1600]: time="2025-11-04T23:56:12.599632411Z" level=info msg="StartContainer for \"2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55\" returns successfully"
Nov 4 23:56:12.611301 systemd[1]: cri-containerd-2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55.scope: Deactivated successfully.
Nov 4 23:56:12.615805 containerd[1600]: time="2025-11-04T23:56:12.615608510Z" level=info msg="received exit event container_id:\"2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55\" id:\"2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55\" pid:4701 exited_at:{seconds:1762300572 nanos:615084819}"
Nov 4 23:56:12.616224 containerd[1600]: time="2025-11-04T23:56:12.616193396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55\" id:\"2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55\" pid:4701 exited_at:{seconds:1762300572 nanos:615084819}"
Nov 4 23:56:12.655571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2db7278ec964c4c49f1446c7487c8155c8520a071296cb59275355d28fec9a55-rootfs.mount: Deactivated successfully.
Nov 4 23:56:13.439020 kubelet[2776]: E1104 23:56:13.438814 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:13.449810 containerd[1600]: time="2025-11-04T23:56:13.449756453Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 4 23:56:13.467366 containerd[1600]: time="2025-11-04T23:56:13.465578184Z" level=info msg="Container 2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:13.476709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760180151.mount: Deactivated successfully.
Nov 4 23:56:13.481720 containerd[1600]: time="2025-11-04T23:56:13.481350593Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b\""
Nov 4 23:56:13.482549 containerd[1600]: time="2025-11-04T23:56:13.482508358Z" level=info msg="StartContainer for \"2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b\""
Nov 4 23:56:13.484283 containerd[1600]: time="2025-11-04T23:56:13.484233446Z" level=info msg="connecting to shim 2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b" address="unix:///run/containerd/s/94b850e430af2bb91cc93cba28a6d504143d56faab8abd9750e1f7db5563cfc7" protocol=ttrpc version=3
Nov 4 23:56:13.534706 systemd[1]: Started cri-containerd-2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b.scope - libcontainer container 2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b.
Nov 4 23:56:13.576770 systemd[1]: cri-containerd-2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b.scope: Deactivated successfully.
Nov 4 23:56:13.580299 containerd[1600]: time="2025-11-04T23:56:13.580204298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b\" id:\"2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b\" pid:4739 exited_at:{seconds:1762300573 nanos:579256390}"
Nov 4 23:56:13.583265 containerd[1600]: time="2025-11-04T23:56:13.583054195Z" level=info msg="received exit event container_id:\"2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b\" id:\"2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b\" pid:4739 exited_at:{seconds:1762300573 nanos:579256390}"
Nov 4 23:56:13.594616 containerd[1600]: time="2025-11-04T23:56:13.594523387Z" level=info msg="StartContainer for \"2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b\" returns successfully"
Nov 4 23:56:13.614036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aa32f08d51124d31805cf65b4c866b925a003f52af444d202da49640a3c9c8b-rootfs.mount: Deactivated successfully.
Nov 4 23:56:14.445887 kubelet[2776]: E1104 23:56:14.445455 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:14.461695 containerd[1600]: time="2025-11-04T23:56:14.461649846Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 4 23:56:14.498092 containerd[1600]: time="2025-11-04T23:56:14.495009661Z" level=info msg="Container a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:14.497438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350215841.mount: Deactivated successfully.
Nov 4 23:56:14.504590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086098586.mount: Deactivated successfully.
Nov 4 23:56:14.513554 containerd[1600]: time="2025-11-04T23:56:14.513473029Z" level=info msg="CreateContainer within sandbox \"04545066e8a02b6dbf6d229feb2357e1af0fffb322ba0e7e7a798631cb5d56c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\""
Nov 4 23:56:14.515618 containerd[1600]: time="2025-11-04T23:56:14.515471499Z" level=info msg="StartContainer for \"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\""
Nov 4 23:56:14.518428 containerd[1600]: time="2025-11-04T23:56:14.518359917Z" level=info msg="connecting to shim a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce" address="unix:///run/containerd/s/94b850e430af2bb91cc93cba28a6d504143d56faab8abd9750e1f7db5563cfc7" protocol=ttrpc version=3
Nov 4 23:56:14.552681 systemd[1]: Started cri-containerd-a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce.scope - libcontainer container a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce.
Nov 4 23:56:14.621249 containerd[1600]: time="2025-11-04T23:56:14.621116145Z" level=info msg="StartContainer for \"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\" returns successfully"
Nov 4 23:56:14.758770 containerd[1600]: time="2025-11-04T23:56:14.757979774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\" id:\"a2d19c824b6acdc6a4b264ecd6c683126e9de429c4f200ef2f260d2a86b119f5\" pid:4809 exited_at:{seconds:1762300574 nanos:755148351}"
Nov 4 23:56:15.165374 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 4 23:56:15.463718 kubelet[2776]: E1104 23:56:15.463635 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:15.489347 kubelet[2776]: I1104 23:56:15.489263 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xhg2c" podStartSLOduration=5.489242512 podStartE2EDuration="5.489242512s" podCreationTimestamp="2025-11-04 23:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:15.488956392 +0000 UTC m=+98.855824994" watchObservedRunningTime="2025-11-04 23:56:15.489242512 +0000 UTC m=+98.856111103"
Nov 4 23:56:16.467311 kubelet[2776]: E1104 23:56:16.467136 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:17.073087 containerd[1600]: time="2025-11-04T23:56:17.073026893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\" id:\"ea3b1cf056474d0d1d17749b45471c69f9bb61df297ca3b1393cff3cfcc39775\" pid:4982 exit_status:1 exited_at:{seconds:1762300577 nanos:72354321}"
Nov 4 23:56:18.880309 kubelet[2776]: E1104 23:56:18.878181 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:18.982916 systemd-networkd[1497]: lxc_health: Link UP
Nov 4 23:56:18.998092 systemd-networkd[1497]: lxc_health: Gained carrier
Nov 4 23:56:19.511651 containerd[1600]: time="2025-11-04T23:56:19.511580806Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\" id:\"d1d9dbbf15fdd72cbe0a249a0727e827c52c4c5bd645eae0efcee31d068a9c72\" pid:5358 exited_at:{seconds:1762300579 nanos:510571678}"
Nov 4 23:56:20.437380 kubelet[2776]: E1104 23:56:20.437050 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:20.482198 kubelet[2776]: E1104 23:56:20.482108 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:20.961467 systemd-networkd[1497]: lxc_health: Gained IPv6LL
Nov 4 23:56:21.485707 kubelet[2776]: E1104 23:56:21.485604 2776 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 4 23:56:21.751112 containerd[1600]: time="2025-11-04T23:56:21.750455386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\" id:\"fd79d84d88c94b4742161d0e55698d452249edaf839cf0077bc08b01a189be74\" pid:5397 exited_at:{seconds:1762300581 nanos:749267806}"
Nov 4 23:56:23.976823 containerd[1600]: time="2025-11-04T23:56:23.975677487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\" id:\"aabf3d3d59be3dc505eefd691c46deecbb22d7e8157ab739cddc840ca488cea6\" pid:5422 exited_at:{seconds:1762300583 nanos:974977047}"
Nov 4 23:56:26.143259 containerd[1600]: time="2025-11-04T23:56:26.142893434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a34a728499babe088418eb507f385752605f058cd7d3d6272d159943f77f89ce\" id:\"42ef8aa5433c177c767395ac23f4c306dd169736fb190fabdd19e80515bff302\" pid:5451 exited_at:{seconds:1762300586 nanos:141920116}"
Nov 4 23:56:26.154638 sshd[4542]: Connection closed by 139.178.89.65 port 35120
Nov 4 23:56:26.156102 sshd-session[4539]: pam_unix(sshd:session): session closed for user core
Nov 4 23:56:26.163507 systemd[1]: sshd@25-64.23.154.5:22-139.178.89.65:35120.service: Deactivated successfully.
Nov 4 23:56:26.168019 systemd[1]: session-26.scope: Deactivated successfully.
Nov 4 23:56:26.172254 systemd-logind[1571]: Session 26 logged out. Waiting for processes to exit.
Nov 4 23:56:26.175979 systemd-logind[1571]: Removed session 26.