Jan 29 11:11:05.081342 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025 Jan 29 11:11:05.081388 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:11:05.081408 kernel: BIOS-provided physical RAM map: Jan 29 11:11:05.081420 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:11:05.081430 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:11:05.081467 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:11:05.081479 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 29 11:11:05.081491 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 29 11:11:05.081501 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 11:11:05.081512 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:11:05.081574 kernel: NX (Execute Disable) protection: active Jan 29 11:11:05.081586 kernel: APIC: Static calls initialized Jan 29 11:11:05.081602 kernel: SMBIOS 2.8 present. Jan 29 11:11:05.081613 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 29 11:11:05.081626 kernel: Hypervisor detected: KVM Jan 29 11:11:05.081638 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:11:05.081661 kernel: kvm-clock: using sched offset of 4722505201 cycles Jan 29 11:11:05.081674 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:11:05.081685 kernel: tsc: Detected 2294.606 MHz processor Jan 29 11:11:05.081698 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:11:05.081710 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:11:05.081723 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 29 11:11:05.081736 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:11:05.081752 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:11:05.081772 kernel: ACPI: Early table checksum verification disabled Jan 29 11:11:05.081783 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 29 11:11:05.081796 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:05.081808 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:05.081821 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:05.081833 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 29 11:11:05.081845 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:05.081858 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:05.081870 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:05.081889 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:05.081901 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 29 11:11:05.081913 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 29 11:11:05.081926 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 29 11:11:05.081938 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 29 11:11:05.081949 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 29 11:11:05.081961 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 29 11:11:05.081981 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 29 11:11:05.081999 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:11:05.082011 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 11:11:05.082023 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:11:05.082036 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 29 11:11:05.082056 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 29 11:11:05.082070 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 29 11:11:05.082089 kernel: Zone ranges: Jan 29 11:11:05.082102 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:11:05.082115 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 29 11:11:05.082128 kernel: Normal empty Jan 29 11:11:05.082142 kernel: Movable zone start for each node Jan 29 11:11:05.082155 kernel: Early memory node ranges Jan 29 11:11:05.082168 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:11:05.082182 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 29 11:11:05.082197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 29 11:11:05.082217 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:11:05.082232 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:11:05.082252 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 29 11:11:05.082267 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:11:05.082281 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:11:05.082331 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:11:05.082348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:11:05.082392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:11:05.082407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:11:05.082420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:11:05.083521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:11:05.083543 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:11:05.083559 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:11:05.083575 kernel: TSC deadline timer available Jan 29 11:11:05.083592 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 11:11:05.083608 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:11:05.083625 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 29 11:11:05.083651 kernel: Booting paravirtualized kernel on KVM Jan 29 11:11:05.083676 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:11:05.083706 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 11:11:05.083724 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 11:11:05.083742 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 11:11:05.083759 kernel: pcpu-alloc: [0] 0 1 Jan 29 11:11:05.083776 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 11:11:05.083798 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:11:05.083822 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:11:05.083836 kernel: random: crng init done Jan 29 11:11:05.083855 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:11:05.083870 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:11:05.083886 kernel: Fallback order for Node 0: 0 Jan 29 11:11:05.083902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 29 11:11:05.083917 kernel: Policy zone: DMA32 Jan 29 11:11:05.083932 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:11:05.083946 kernel: Memory: 1969152K/2096612K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127200K reserved, 0K cma-reserved) Jan 29 11:11:05.083962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:11:05.083978 kernel: Kernel/User page tables isolation: enabled Jan 29 11:11:05.084000 kernel: ftrace: allocating 37890 entries in 149 pages Jan 29 11:11:05.084015 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:11:05.084029 kernel: Dynamic Preempt: voluntary Jan 29 11:11:05.084044 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:11:05.084063 kernel: rcu: RCU event tracing is enabled. Jan 29 11:11:05.084078 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:11:05.084097 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:11:05.084112 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:11:05.084127 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:11:05.084149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:11:05.084168 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:11:05.084184 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 11:11:05.084199 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:11:05.084223 kernel: Console: colour VGA+ 80x25 Jan 29 11:11:05.084245 kernel: printk: console [tty0] enabled Jan 29 11:11:05.084261 kernel: printk: console [ttyS0] enabled Jan 29 11:11:05.084275 kernel: ACPI: Core revision 20230628 Jan 29 11:11:05.084290 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:11:05.084313 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:11:05.084329 kernel: x2apic enabled Jan 29 11:11:05.084353 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:11:05.084368 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:11:05.084381 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Jan 29 11:11:05.084394 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294606) Jan 29 11:11:05.084407 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 11:11:05.084420 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 11:11:05.084482 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:11:05.084497 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:11:05.084512 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:11:05.084532 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:11:05.084545 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 29 11:11:05.084558 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:11:05.084571 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:11:05.084585 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 11:11:05.084600 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:11:05.084626 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:11:05.084642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:11:05.084673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:11:05.084726 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:11:05.084743 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 11:11:05.084760 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:11:05.084776 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:11:05.084793 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:11:05.084816 kernel: landlock: Up and running. Jan 29 11:11:05.084850 kernel: SELinux: Initializing. Jan 29 11:11:05.084867 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:11:05.084884 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:11:05.084901 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 29 11:11:05.084918 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:11:05.084946 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:11:05.084964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:11:05.084989 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 29 11:11:05.085013 kernel: signal: max sigframe size: 1776 Jan 29 11:11:05.085042 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:11:05.085060 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:11:05.085078 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:11:05.085094 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:11:05.085110 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:11:05.085127 kernel: .... node #0, CPUs: #1 Jan 29 11:11:05.085143 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:11:05.085164 kernel: smpboot: Max logical packages: 1 Jan 29 11:11:05.085188 kernel: smpboot: Total of 2 processors activated (9178.42 BogoMIPS) Jan 29 11:11:05.085203 kernel: devtmpfs: initialized Jan 29 11:11:05.085220 kernel: x86/mm: Memory block size: 128MB Jan 29 11:11:05.085237 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:11:05.085253 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:11:05.085270 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:11:05.085287 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:11:05.085304 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:11:05.085321 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:11:05.085343 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:11:05.085359 kernel: audit: type=2000 audit(1738149063.864:1): state=initialized audit_enabled=0 res=1 Jan 29 11:11:05.085399 kernel: cpuidle: using governor menu Jan 29 11:11:05.085416 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:11:05.085609 kernel: dca service started, version 1.12.1 Jan 29 11:11:05.085629 kernel: PCI: Using configuration type 1 for base access Jan 29 11:11:05.085644 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:11:05.085658 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:11:05.085672 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:11:05.085697 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:11:05.085711 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:11:05.085727 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:11:05.085743 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:11:05.085759 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:11:05.085775 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:11:05.085792 kernel: ACPI: Interpreter enabled Jan 29 11:11:05.085807 kernel: ACPI: PM: (supports S0 S5) Jan 29 11:11:05.085825 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:11:05.085848 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:11:05.085864 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:11:05.085881 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 11:11:05.085896 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:11:05.086291 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:11:05.086528 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 11:11:05.086700 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 11:11:05.086733 kernel: acpiphp: Slot [3] registered Jan 29 11:11:05.086749 kernel: acpiphp: Slot [4] registered Jan 29 11:11:05.086764 kernel: acpiphp: Slot [5] registered Jan 29 11:11:05.086777 kernel: acpiphp: Slot [6] registered Jan 29 11:11:05.086791 kernel: acpiphp: Slot [7] registered Jan 29 11:11:05.086808 kernel: acpiphp: Slot [8] registered Jan 29 11:11:05.086821 kernel: acpiphp: Slot [9] registered Jan 29 11:11:05.086834 kernel: acpiphp: Slot [10] registered Jan 29 11:11:05.086848 kernel: acpiphp: Slot [11] registered Jan 29 11:11:05.086868 kernel: acpiphp: Slot [12] registered Jan 29 11:11:05.086882 kernel: acpiphp: Slot [13] registered Jan 29 11:11:05.086895 kernel: acpiphp: Slot [14] registered Jan 29 11:11:05.086909 kernel: acpiphp: Slot [15] registered Jan 29 11:11:05.086924 kernel: acpiphp: Slot [16] registered Jan 29 11:11:05.086938 kernel: acpiphp: Slot [17] registered Jan 29 11:11:05.086953 kernel: acpiphp: Slot [18] registered Jan 29 11:11:05.086967 kernel: acpiphp: Slot [19] registered Jan 29 11:11:05.086980 kernel: acpiphp: Slot [20] registered Jan 29 11:11:05.086995 kernel: acpiphp: Slot [21] registered Jan 29 11:11:05.087017 kernel: acpiphp: Slot [22] registered Jan 29 11:11:05.087031 kernel: acpiphp: Slot [23] registered Jan 29 11:11:05.087046 kernel: acpiphp: Slot [24] registered Jan 29 11:11:05.087061 kernel: acpiphp: Slot [25] registered Jan 29 11:11:05.087076 kernel: acpiphp: Slot [26] registered Jan 29 11:11:05.087091 kernel: acpiphp: Slot [27] registered Jan 29 11:11:05.087106 kernel: acpiphp: Slot [28] registered Jan 29 11:11:05.087120 kernel: acpiphp: Slot [29] registered Jan 29 11:11:05.087135 kernel: acpiphp: Slot [30] registered Jan 29 11:11:05.087155 kernel: acpiphp: Slot [31] registered Jan 29 11:11:05.087169 kernel: PCI host bridge to bus 0000:00 Jan 29 11:11:05.087410 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:11:05.087611 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 29 11:11:05.087784 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:11:05.087942 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 11:11:05.088088 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 29 11:11:05.088234 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:11:05.088591 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 11:11:05.088890 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 11:11:05.089134 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 11:11:05.089299 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 29 11:11:05.089498 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 11:11:05.089732 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 11:11:05.089924 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 11:11:05.090089 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 11:11:05.090280 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 29 11:11:05.090511 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 29 11:11:05.090709 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 11:11:05.090925 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 11:11:05.091092 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 11:11:05.091302 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 11:11:05.091502 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 11:11:05.091659 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 29 11:11:05.091817 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 29 11:11:05.091996 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 11:11:05.092191 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:11:05.092478 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:11:05.092820 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 29 11:11:05.093031 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 29 11:11:05.093207 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 29 11:11:05.093413 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:11:05.093649 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 29 11:11:05.093853 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 29 11:11:05.094071 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 29 11:11:05.094304 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 29 11:11:05.096143 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 29 11:11:05.096382 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 29 11:11:05.096565 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 29 11:11:05.096780 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:11:05.096954 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 11:11:05.097135 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 29 11:11:05.098745 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 29 11:11:05.098960 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:11:05.099137 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 29 11:11:05.099305 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 29 11:11:05.100644 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 29 11:11:05.100976 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 11:11:05.101235 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 29 11:11:05.101402 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 29 11:11:05.104465 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:11:05.104532 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:11:05.104554 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:11:05.104575 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:11:05.104596 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 11:11:05.104628 kernel: iommu: Default domain type: Translated Jan 29 11:11:05.104649 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:11:05.104693 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:11:05.104703 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:11:05.104713 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:11:05.104726 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 29 11:11:05.104957 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 11:11:05.105068 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 11:11:05.105179 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:11:05.105192 kernel: vgaarb: loaded Jan 29 11:11:05.105203 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:11:05.105213 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:11:05.105223 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:11:05.105233 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:11:05.105244 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:11:05.105253 kernel: pnp: PnP ACPI init Jan 29 11:11:05.105263 kernel: pnp: PnP ACPI: found 4 devices Jan 29 11:11:05.105277 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:11:05.105286 kernel: NET: Registered PF_INET protocol family Jan 29 11:11:05.105296 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:11:05.105306 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:11:05.105315 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:11:05.105325 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:11:05.105335 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:11:05.105345 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:11:05.105354 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:11:05.105369 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:11:05.105378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:11:05.105388 kernel: NET: Registered PF_XDP protocol family Jan 29 11:11:05.105583 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:11:05.105680 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:11:05.105770 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:11:05.105875 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 11:11:05.105976 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 29 11:11:05.106104 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 11:11:05.106219 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:11:05.106235 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 11:11:05.106344 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 46639 usecs Jan 29 11:11:05.106359 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:11:05.106369 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:11:05.106379 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Jan 29 11:11:05.106418 kernel: Initialise system trusted keyrings Jan 29 11:11:05.106460 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:11:05.106470 kernel: Key type asymmetric registered Jan 29 11:11:05.106480 kernel: Asymmetric key parser 'x509' registered Jan 29 11:11:05.106491 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:11:05.106501 kernel: io scheduler mq-deadline registered Jan 29 11:11:05.106511 kernel: io scheduler kyber registered Jan 29 11:11:05.106520 kernel: io scheduler bfq registered Jan 29 11:11:05.106530 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:11:05.106541 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 11:11:05.106551 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 11:11:05.106565 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 11:11:05.106575 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:11:05.106586 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:11:05.106596 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:11:05.106606 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:11:05.106615 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:11:05.106777 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 11:11:05.106793 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:11:05.106900 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 11:11:05.106996 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T11:11:04 UTC (1738149064) Jan 29 11:11:05.107093 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 11:11:05.107106 kernel: intel_pstate: CPU model not supported Jan 29 11:11:05.107116 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:11:05.107127 kernel: Segment Routing with IPv6 Jan 29 11:11:05.107137 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:11:05.107147 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:11:05.107163 kernel: Key type dns_resolver registered Jan 29 11:11:05.107173 kernel: IPI shorthand broadcast: enabled Jan 29 11:11:05.107183 kernel: sched_clock: Marking stable (1354005026, 191235308)->(1592406333, -47165999) Jan 29 11:11:05.107192 kernel: registered taskstats version 1 Jan 29 11:11:05.107202 kernel: Loading compiled-in X.509 certificates Jan 29 11:11:05.107212 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 29 11:11:05.107222 kernel: Key type .fscrypt registered
Jan 29 11:11:05.107232 kernel: Key type fscrypt-provisioning registered Jan 29 11:11:05.107242 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:11:05.107256 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:11:05.107266 kernel: ima: No architecture policies found Jan 29 11:11:05.107275 kernel: clk: Disabling unused clocks Jan 29 11:11:05.107284 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:11:05.107295 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:11:05.107329 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:11:05.107344 kernel: Run /init as init process Jan 29 11:11:05.107354 kernel: with arguments: Jan 29 11:11:05.107365 kernel: /init Jan 29 11:11:05.107378 kernel: with environment: Jan 29 11:11:05.107388 kernel: HOME=/ Jan 29 11:11:05.107401 kernel: TERM=linux Jan 29 11:11:05.107411 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:11:05.108515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:11:05.108569 systemd[1]: Detected virtualization kvm. Jan 29 11:11:05.108593 systemd[1]: Detected architecture x86-64. Jan 29 11:11:05.108615 systemd[1]: Running in initrd. Jan 29 11:11:05.108651 systemd[1]: No hostname configured, using default hostname. Jan 29 11:11:05.108696 systemd[1]: Hostname set to . Jan 29 11:11:05.108725 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:11:05.108752 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:11:05.108779 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:11:05.108807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:11:05.108836 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:11:05.108863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:11:05.108899 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:11:05.108922 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:11:05.108948 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:11:05.108972 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:11:05.108999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:11:05.109021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:11:05.109050 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:11:05.109073 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:11:05.109096 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:11:05.109123 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:11:05.109146 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:11:05.109169 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 29 11:11:05.109197 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:11:05.109220 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:11:05.109243 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:11:05.109272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:11:05.109298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:11:05.109327 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:11:05.109356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:11:05.109382 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:11:05.109412 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:11:05.109452 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:11:05.109476 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:11:05.109499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:11:05.109522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:05.109622 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 11:11:05.109682 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:11:05.109705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:11:05.109730 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:11:05.109760 systemd-journald[184]: Journal started Jan 29 11:11:05.109815 systemd-journald[184]: Runtime Journal (/run/log/journal/7f24be0215d340aa81c9d00fc78f40c5) is 4.9M, max 39.3M, 34.4M free. Jan 29 11:11:05.092913 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 11:11:05.127461 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:11:05.127554 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:11:05.145468 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:11:05.148204 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 11:11:05.195696 kernel: Bridge firewalling registered Jan 29 11:11:05.148821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:11:05.211980 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:11:05.213495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:05.222179 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:11:05.226511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:11:05.234823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:11:05.237694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:11:05.242787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:11:05.268585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:11:05.279850 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 11:11:05.282520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:11:05.285389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:11:05.296758 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:11:05.338086 dracut-cmdline[221]: dracut-dracut-053 Jan 29 11:11:05.343388 systemd-resolved[217]: Positive Trust Anchors: Jan 29 11:11:05.347593 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:11:05.343410 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:11:05.343495 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:11:05.351884 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 29 11:11:05.355269 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:11:05.356969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:11:05.465509 kernel: SCSI subsystem initialized Jan 29 11:11:05.479534 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:11:05.495476 kernel: iscsi: registered transport (tcp) Jan 29 11:11:05.526704 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:11:05.526809 kernel: QLogic iSCSI HBA Driver Jan 29 11:11:05.598821 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:11:05.607821 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:11:05.655582 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:11:05.655676 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:11:05.656516 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:11:05.709493 kernel: raid6: avx2x4 gen() 15155 MB/s Jan 29 11:11:05.727502 kernel: raid6: avx2x2 gen() 15413 MB/s Jan 29 11:11:05.745732 kernel: raid6: avx2x1 gen() 13157 MB/s Jan 29 11:11:05.745856 kernel: raid6: using algorithm avx2x2 gen() 15413 MB/s Jan 29 11:11:05.765168 kernel: raid6: .... xor() 14991 MB/s, rmw enabled Jan 29 11:11:05.765287 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:11:05.797504 kernel: xor: automatically using best checksumming function avx Jan 29 11:11:06.025498 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:11:06.048020 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:11:06.056840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 29 11:11:06.091871 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jan 29 11:11:06.102606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:11:06.116906 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:11:06.150970 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 29 11:11:06.205830 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:11:06.212869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:11:06.332665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:11:06.343818 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:11:06.384372 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:11:06.389136 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:11:06.391196 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:11:06.392985 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:11:06.399024 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:11:06.437897 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:11:06.480515 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 29 11:11:06.553103 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 11:11:06.553336 kernel: libata version 3.00 loaded. Jan 29 11:11:06.553365 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:11:06.553392 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 11:11:06.568213 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:11:06.568397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:11:06.568413 kernel: GPT:9289727 != 125829119 Jan 29 11:11:06.568473 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:11:06.568488 kernel: GPT:9289727 != 125829119 Jan 29 11:11:06.568500 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:11:06.568513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:11:06.568526 kernel: scsi host1: ata_piix Jan 29 11:11:06.568720 kernel: ACPI: bus type USB registered Jan 29 11:11:06.568735 kernel: usbcore: registered new interface driver usbfs Jan 29 11:11:06.568747 kernel: scsi host2: ata_piix Jan 29 11:11:06.568891 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 29 11:11:06.568906 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 29 11:11:06.568919 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 29 11:11:06.578134 kernel: usbcore: registered new interface driver hub Jan 29 11:11:06.578160 kernel: usbcore: registered new device driver usb Jan 29 11:11:06.578173 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Jan 29 11:11:06.578334 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:11:06.580468 kernel: AES CTR mode by8 optimization enabled Jan 29 11:11:06.591170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:11:06.591349 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:11:06.592901 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 11:11:06.593758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:11:06.593885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:06.594706 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:06.602808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:06.674808 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:06.682771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:11:06.743974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:11:06.792467 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (449) Jan 29 11:11:06.803482 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) Jan 29 11:11:06.806071 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:11:06.820518 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:11:06.833392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:11:06.834936 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:11:06.849641 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 29 11:11:06.849950 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 29 11:11:06.850182 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 29 11:11:06.850407 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 29 11:11:06.850635 kernel: hub 1-0:1.0: USB hub found Jan 29 11:11:06.850866 kernel: hub 1-0:1.0: 2 ports detected Jan 29 11:11:06.857353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:11:06.868899 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:11:06.884166 disk-uuid[550]: Primary Header is updated. Jan 29 11:11:06.884166 disk-uuid[550]: Secondary Entries is updated. Jan 29 11:11:06.884166 disk-uuid[550]: Secondary Header is updated. Jan 29 11:11:06.894639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:11:07.906576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:11:07.907813 disk-uuid[551]: The operation has completed successfully. Jan 29 11:11:07.977245 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:11:07.978524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:11:08.001758 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:11:08.006660 sh[562]: Success Jan 29 11:11:08.027471 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:11:08.130057 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:11:08.140689 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:11:08.143380 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:11:08.180684 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 29 11:11:08.180809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:08.182484 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:11:08.185486 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:11:08.185586 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:11:08.199316 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:11:08.201825 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:11:08.208915 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:11:08.212466 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:11:08.239701 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:11:08.239793 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:08.239832 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:11:08.247469 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:11:08.266709 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:11:08.268643 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:11:08.281334 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:11:08.290820 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:11:08.375045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:11:08.385923 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:11:08.427733 systemd-networkd[749]: lo: Link UP Jan 29 11:11:08.427752 systemd-networkd[749]: lo: Gained carrier Jan 29 11:11:08.431609 systemd-networkd[749]: Enumeration completed Jan 29 11:11:08.431810 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:11:08.432781 systemd[1]: Reached target network.target - Network. Jan 29 11:11:08.434219 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 29 11:11:08.434225 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 29 11:11:08.435849 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:11:08.435856 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:11:08.437109 systemd-networkd[749]: eth0: Link UP Jan 29 11:11:08.437121 systemd-networkd[749]: eth0: Gained carrier Jan 29 11:11:08.437140 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 29 11:11:08.448090 systemd-networkd[749]: eth1: Link UP Jan 29 11:11:08.448096 systemd-networkd[749]: eth1: Gained carrier Jan 29 11:11:08.448121 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 11:11:08.460609 systemd-networkd[749]: eth0: DHCPv4 address 64.23.245.19/20, gateway 64.23.240.1 acquired from 169.254.169.253 Jan 29 11:11:08.468137 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253 Jan 29 11:11:08.498201 ignition[666]: Ignition 2.20.0 Jan 29 11:11:08.498850 ignition[666]: Stage: fetch-offline Jan 29 11:11:08.498935 ignition[666]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:08.501501 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:11:08.498948 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:11:08.499098 ignition[666]: parsed url from cmdline: "" Jan 29 11:11:08.499104 ignition[666]: no config URL provided Jan 29 11:11:08.499111 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:11:08.499123 ignition[666]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:11:08.499131 ignition[666]: failed to fetch config: resource requires networking Jan 29 11:11:08.499448 ignition[666]: Ignition finished successfully Jan 29 11:11:08.510986 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:11:08.554212 ignition[758]: Ignition 2.20.0 Jan 29 11:11:08.555288 ignition[758]: Stage: fetch Jan 29 11:11:08.555659 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:08.555674 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:11:08.555793 ignition[758]: parsed url from cmdline: "" Jan 29 11:11:08.555798 ignition[758]: no config URL provided Jan 29 11:11:08.555804 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:11:08.555814 ignition[758]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:11:08.555848 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 29 11:11:08.577911 ignition[758]: GET result: OK Jan 29 11:11:08.578117 ignition[758]: parsing config with SHA512: 2c68f5e3c915d85608097e10bc9b84675564504b7f631994ebe03ab5973792ba28aa0e51aea5f963750becca2cdcd58783a14d9b4c6c2c572d2b078790d42718 Jan 29 11:11:08.585673 unknown[758]: fetched base config from "system" Jan 29 11:11:08.585689 unknown[758]: fetched base config from "system" Jan 29 11:11:08.586485 ignition[758]: fetch: fetch complete Jan 29 11:11:08.585698 unknown[758]: fetched user config from "digitalocean" Jan 29 11:11:08.586494 ignition[758]: fetch: fetch passed Jan 29 11:11:08.588907 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:11:08.586573 ignition[758]: Ignition finished successfully Jan 29 11:11:08.597817 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:11:08.646861 ignition[765]: Ignition 2.20.0 Jan 29 11:11:08.646880 ignition[765]: Stage: kargs Jan 29 11:11:08.647227 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:08.647244 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:11:08.648923 ignition[765]: kargs: kargs passed Jan 29 11:11:08.649032 ignition[765]: Ignition finished successfully Jan 29 11:11:08.650674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:11:08.658829 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 11:11:08.689094 ignition[771]: Ignition 2.20.0 Jan 29 11:11:08.689113 ignition[771]: Stage: disks Jan 29 11:11:08.689405 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:08.689419 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:11:08.690915 ignition[771]: disks: disks passed Jan 29 11:11:08.690997 ignition[771]: Ignition finished successfully Jan 29 11:11:08.693888 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:11:08.700262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:11:08.701602 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:11:08.702977 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:11:08.704379 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:11:08.705751 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:11:08.717800 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:11:08.756468 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:11:08.766333 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:11:08.772763 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:11:08.946463 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 29 11:11:08.946893 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:11:08.949572 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:11:08.956708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:11:08.966745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:11:08.973158 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 29 11:11:08.978506 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:11:08.982269 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:11:08.982366 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:11:08.989158 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:11:08.998495 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (787) Jan 29 11:11:09.004519 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:11:09.009306 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:09.009416 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:11:09.010555 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:11:09.026963 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:11:09.037101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:11:09.093791 coreos-metadata[789]: Jan 29 11:11:09.093 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:11:09.102177 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:11:09.108515 coreos-metadata[789]: Jan 29 11:11:09.107 INFO Fetch successful Jan 29 11:11:09.113540 coreos-metadata[790]: Jan 29 11:11:09.113 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:11:09.117370 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:11:09.122946 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 29 11:11:09.123159 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 29 11:11:09.128323 coreos-metadata[790]: Jan 29 11:11:09.126 INFO Fetch successful Jan 29 11:11:09.138345 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:11:09.140399 coreos-metadata[790]: Jan 29 11:11:09.138 INFO wrote hostname ci-4186.1.0-4-1698ea429f to /sysroot/etc/hostname Jan 29 11:11:09.140019 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:11:09.149542 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:11:09.303672 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:11:09.312769 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:11:09.330008 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:11:09.345289 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:11:09.347864 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:11:09.370703 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:11:09.400889 ignition[908]: INFO : Ignition 2.20.0 Jan 29 11:11:09.403658 ignition[908]: INFO : Stage: mount Jan 29 11:11:09.403658 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:09.403658 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:11:09.406056 ignition[908]: INFO : mount: mount passed Jan 29 11:11:09.406056 ignition[908]: INFO : Ignition finished successfully Jan 29 11:11:09.405766 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:11:09.413698 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:11:09.443828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:11:09.463483 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (920) Jan 29 11:11:09.467984 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:11:09.468109 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:09.471208 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:11:09.476486 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:11:09.480248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:11:09.513698 ignition[936]: INFO : Ignition 2.20.0 Jan 29 11:11:09.513698 ignition[936]: INFO : Stage: files Jan 29 11:11:09.515370 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:09.515370 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:11:09.518969 ignition[936]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:11:09.518969 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:11:09.518969 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:11:09.517074 systemd-networkd[749]: eth0: Gained IPv6LL Jan 29 11:11:09.528740 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:11:09.530115 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:11:09.530115 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:11:09.529893 unknown[936]: wrote ssh authorized keys file for user: core Jan 29 11:11:09.534782 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:11:09.536666 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:11:09.576360 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:11:09.640476 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:11:09.640476 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:11:09.640476 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:11:09.900776 systemd-networkd[749]: eth1: Gained IPv6LL Jan 29 11:11:10.316465 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:11:10.436583 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:11:10.436583 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:11:10.440769 ignition[936]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:11:10.440769 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:11:10.978953 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:11:11.282532 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:11:11.282532 ignition[936]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:11:11.285911 ignition[936]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:11:11.285911 ignition[936]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:11:11.285911 ignition[936]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:11:11.285911 ignition[936]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:11:11.285911 ignition[936]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:11:11.285911 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:11:11.285911 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:11:11.285911 ignition[936]: INFO : files: files passed Jan 29 11:11:11.285911 ignition[936]: INFO : Ignition finished successfully Jan 29 11:11:11.286213 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:11:11.295816 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:11:11.299719 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:11:11.305990 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:11:11.307172 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
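The files stage above writes several regular files, one symlink, a systemd unit plus its preset, and finally a result record at /sysroot/etc/.ignition-result.json. The full set of written paths can be recovered from journal text like this with a single pattern (Python sketch, illustrative only; the regex mirrors the "[finished] writing" wording above and deliberately skips the unit-writing ops):

    import re

    WRITTEN = re.compile(r'\[finished\] writing (?:file|link) "([^"]+)"')

    def ignition_written_paths(journal_text: str) -> list[str]:
        # e.g. ["/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz",
        #       "/sysroot/opt/bin/cilium.tar.gz", ...]
        return WRITTEN.findall(journal_text)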
Jan 29 11:11:11.321788 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:11:11.321788 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:11:11.324667 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:11:11.325716 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:11:11.327157 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:11:11.330677 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:11:11.398105 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:11:11.398277 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:11:11.400043 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:11:11.401470 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:11:11.402814 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:11:11.414467 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:11:11.436816 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:11:11.444971 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:11:11.475956 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:11:11.478311 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:11:11.480673 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:11:11.482315 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:11:11.482643 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:11:11.486343 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:11:11.487383 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:11:11.488748 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:11:11.490007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:11:11.491488 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:11:11.492999 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:11:11.494376 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:11:11.495677 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:11:11.497044 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:11:11.498385 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:11:11.499515 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:11:11.499783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:11:11.501440 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:11:11.502487 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:11:11.503884 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:11:11.504202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 11:11:11.505369 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:11:11.505719 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:11:11.507701 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:11:11.508034 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:11:11.509828 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:11:11.510137 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:11:11.511974 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:11:11.512215 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:11:11.520105 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:11:11.522957 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:11:11.523410 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:11:11.534947 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:11:11.536767 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:11:11.537574 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:11:11.543244 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:11:11.544538 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:11:11.557197 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:11:11.558557 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:11:11.569939 ignition[989]: INFO : Ignition 2.20.0 Jan 29 11:11:11.569939 ignition[989]: INFO : Stage: umount Jan 29 11:11:11.569939 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:11.569939 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:11:11.569939 ignition[989]: INFO : umount: umount passed Jan 29 11:11:11.569939 ignition[989]: INFO : Ignition finished successfully Jan 29 11:11:11.576284 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:11:11.579811 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:11:11.588704 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:11:11.589911 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:11:11.589998 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:11:11.603032 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:11:11.603139 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:11:11.607468 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:11:11.607594 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:11:11.608785 systemd[1]: Stopped target network.target - Network. Jan 29 11:11:11.612285 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:11:11.612420 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:11:11.613689 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:11:11.614929 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:11:11.618582 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 11:11:11.622366 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:11:11.623710 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:11:11.625291 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:11:11.625376 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:11:11.626388 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:11:11.626471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:11:11.627844 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:11:11.627971 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:11:11.628807 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:11:11.628892 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:11:11.639354 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:11:11.640909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:11:11.643604 systemd-networkd[749]: eth0: DHCPv6 lease lost Jan 29 11:11:11.658525 systemd-networkd[749]: eth1: DHCPv6 lease lost Jan 29 11:11:11.663051 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:11:11.663315 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:11:11.674841 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:11:11.675129 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:11:11.685365 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:11:11.686032 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:11:11.691763 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:11:11.693955 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:11:11.694098 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:11:11.696677 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:11:11.696798 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:11:11.700202 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:11:11.700303 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:11:11.703018 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:11:11.703115 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:11:11.705676 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:11:11.709240 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:11:11.710855 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:11:11.716770 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:11:11.716949 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:11:11.725053 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:11:11.725309 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:11:11.727162 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:11:11.727306 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:11:11.728206 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 11:11:11.728282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:11:11.729798 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:11:11.729883 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:11:11.731459 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:11:11.731556 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:11:11.732998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:11:11.733084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:11:11.737736 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:11:11.738635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:11:11.738737 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:11:11.740334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:11:11.740418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:11.743912 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:11:11.745527 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:11:11.765388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:11:11.765597 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:11:11.767461 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:11:11.775891 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:11:11.794149 systemd[1]: Switching root. Jan 29 11:11:11.839580 systemd-journald[184]: Journal stopped Jan 29 11:11:13.542907 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 11:11:13.543009 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:11:13.543033 kernel: SELinux: policy capability open_perms=1 Jan 29 11:11:13.543045 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:11:13.543058 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:11:13.543071 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:11:13.543084 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:11:13.543101 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:11:13.543114 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:11:13.543126 kernel: audit: type=1403 audit(1738149072.125:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:11:13.543153 systemd[1]: Successfully loaded SELinux policy in 55.717ms. Jan 29 11:11:13.543175 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.223ms. Jan 29 11:11:13.543191 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:11:13.543205 systemd[1]: Detected virtualization kvm. Jan 29 11:11:13.543219 systemd[1]: Detected architecture x86-64. Jan 29 11:11:13.543236 systemd[1]: Detected first boot. Jan 29 11:11:13.543250 systemd[1]: Hostname set to <ci-4186.1.0-4-1698ea429f>. Jan 29 11:11:13.543264 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:11:13.543278 zram_generator::config[1033]: No configuration found. Jan 29 11:11:13.543293 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:11:13.543307 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:11:13.543322 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:11:13.543336 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:11:13.543351 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:11:13.543369 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:11:13.543383 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:11:13.543398 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:11:13.543419 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:11:13.544519 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:11:13.544565 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:11:13.544603 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:11:13.544630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:11:13.544663 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:11:13.544692 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:11:13.544720 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:11:13.544742 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:11:13.544761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:11:13.544783 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:11:13.544807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:11:13.544822 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:11:13.544841 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:11:13.544855 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:11:13.544869 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:11:13.544882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:11:13.544897 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:11:13.544911 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:11:13.544926 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:11:13.544939 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:11:13.544958 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:11:13.544973 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:11:13.544989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:11:13.545003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:11:13.545017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 29 11:11:13.545031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:11:13.545045 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:11:13.545065 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:11:13.545080 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:13.545099 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:11:13.545113 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:11:13.545127 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:11:13.545146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:11:13.545160 systemd[1]: Reached target machines.target - Containers. Jan 29 11:11:13.545173 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:11:13.545189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:13.545202 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:11:13.545220 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:11:13.545234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:11:13.545247 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:11:13.545261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:11:13.545275 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:11:13.545288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:11:13.545303 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:11:13.545317 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:11:13.545334 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:11:13.545348 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:11:13.545363 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:11:13.545376 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:11:13.545390 kernel: fuse: init (API version 7.39) Jan 29 11:11:13.545405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:11:13.545419 kernel: loop: module loaded Jan 29 11:11:13.545468 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:11:13.545485 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:11:13.545504 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:11:13.546492 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:11:13.546526 systemd[1]: Stopped verity-setup.service. Jan 29 11:11:13.546542 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 11:11:13.546556 kernel: ACPI: bus type drm_connector registered Jan 29 11:11:13.546572 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:11:13.546585 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:11:13.546599 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:11:13.546613 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:11:13.546636 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:11:13.546649 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:11:13.546663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:11:13.546725 systemd-journald[1109]: Collecting audit messages is disabled. Jan 29 11:11:13.546762 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:11:13.546775 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:11:13.546788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:11:13.546806 systemd-journald[1109]: Journal started Jan 29 11:11:13.546837 systemd-journald[1109]: Runtime Journal (/run/log/journal/7f24be0215d340aa81c9d00fc78f40c5) is 4.9M, max 39.3M, 34.4M free. Jan 29 11:11:13.088226 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:11:13.112907 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:11:13.548633 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:11:13.113748 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:11:13.550512 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:11:13.554860 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:11:13.555139 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:11:13.556518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:11:13.556765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:11:13.558094 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:11:13.558311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:11:13.560401 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:11:13.560778 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:11:13.562086 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:11:13.563787 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:11:13.566113 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:11:13.589297 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:11:13.597683 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:11:13.605639 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:11:13.606305 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:11:13.606366 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:11:13.611067 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 29 11:11:13.622886 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:11:13.634895 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:11:13.635977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:13.645759 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:11:13.652690 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:11:13.653674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:11:13.664859 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:11:13.665751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:11:13.675695 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:11:13.687716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:11:13.696577 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:11:13.702032 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:11:13.703071 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:11:13.705273 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:11:13.732773 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:11:13.738714 systemd-journald[1109]: Time spent on flushing to /var/log/journal/7f24be0215d340aa81c9d00fc78f40c5 is 134.719ms for 989 entries. Jan 29 11:11:13.738714 systemd-journald[1109]: System Journal (/var/log/journal/7f24be0215d340aa81c9d00fc78f40c5) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:11:13.939141 systemd-journald[1109]: Received client request to flush runtime journal. Jan 29 11:11:13.939253 kernel: loop0: detected capacity change from 0 to 210664 Jan 29 11:11:13.939293 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:11:13.939326 kernel: loop1: detected capacity change from 0 to 141000 Jan 29 11:11:13.780794 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:11:13.818082 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:11:13.822413 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:11:13.833920 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:11:13.851581 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:11:13.867899 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:11:13.870957 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:11:13.882748 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:11:13.947963 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:11:13.957244 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:11:13.961718 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 29 11:11:13.975604 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:11:13.989190 kernel: loop2: detected capacity change from 0 to 8 Jan 29 11:11:14.013540 kernel: loop3: detected capacity change from 0 to 138184 Jan 29 11:11:14.014084 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 11:11:14.014118 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 11:11:14.031907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:11:14.151485 kernel: loop4: detected capacity change from 0 to 210664 Jan 29 11:11:14.200040 kernel: loop5: detected capacity change from 0 to 141000 Jan 29 11:11:14.268286 kernel: loop6: detected capacity change from 0 to 8 Jan 29 11:11:14.274001 kernel: loop7: detected capacity change from 0 to 138184 Jan 29 11:11:14.312766 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 29 11:11:14.322106 (sd-merge)[1179]: Merged extensions into '/usr'. Jan 29 11:11:14.338136 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:11:14.338159 systemd[1]: Reloading... Jan 29 11:11:14.456084 zram_generator::config[1202]: No configuration found. Jan 29 11:11:14.768357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:11:14.874034 systemd[1]: Reloading finished in 532 ms. Jan 29 11:11:14.892815 ldconfig[1147]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:11:14.912999 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:11:14.915687 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:11:14.933986 systemd[1]: Starting ensure-sysext.service... Jan 29 11:11:14.945136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:11:14.976713 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:11:14.976749 systemd[1]: Reloading... Jan 29 11:11:15.031328 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:11:15.032623 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:11:15.034207 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:11:15.035070 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 29 11:11:15.035314 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jan 29 11:11:15.048824 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:11:15.049052 systemd-tmpfiles[1249]: Skipping /boot Jan 29 11:11:15.117095 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:11:15.117120 systemd-tmpfiles[1249]: Skipping /boot Jan 29 11:11:15.182482 zram_generator::config[1279]: No configuration found. 
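The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean extension images onto /usr, which is what triggers the daemon reload. A small sketch for listing candidate extension images on such a system, assuming only the standard sysext search directories (the Ignition files stage earlier already linked kubernetes.raw under /etc/extensions):

    from pathlib import Path

    # systemd-sysext looks for *.raw images (among other layouts) in these directories.
    SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def list_sysext_images() -> list[Path]:
        images: list[Path] = []
        for d in SYSEXT_DIRS:
            p = Path(d)
            if p.is_dir():
                images.extend(sorted(p.glob("*.raw")))
        return images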
Jan 29 11:11:15.405902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:11:15.485777 systemd[1]: Reloading finished in 508 ms. Jan 29 11:11:15.510593 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:11:15.524291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:11:15.540844 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:11:15.545747 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:11:15.558911 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:11:15.565793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:11:15.575836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:11:15.587823 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:11:15.609292 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:11:15.615713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:15.616062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:15.626063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:11:15.633622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:11:15.643983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:11:15.644958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:15.645193 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:15.650235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:15.652631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:15.653016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:15.653208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:15.661925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:15.663148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:15.672663 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:11:15.674885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:15.675181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 11:11:15.682360 systemd[1]: Finished ensure-sysext.service. Jan 29 11:11:15.691129 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 29 11:11:15.709751 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:11:15.730797 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:11:15.741931 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:11:15.744007 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:11:15.746729 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:11:15.763781 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:11:15.817788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:11:15.818084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:11:15.821242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:11:15.822324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:11:15.824187 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:11:15.825416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:11:15.827094 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:11:15.828055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:11:15.842013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:11:15.842379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:11:15.846501 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:11:15.856000 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:11:15.882585 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:11:15.889480 augenrules[1379]: No rules Jan 29 11:11:15.894649 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:11:15.895013 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:11:15.910944 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:11:16.058514 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:11:16.059836 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:11:16.081675 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:11:16.089714 systemd-networkd[1357]: lo: Link UP Jan 29 11:11:16.089729 systemd-networkd[1357]: lo: Gained carrier Jan 29 11:11:16.093129 systemd-networkd[1357]: Enumeration completed Jan 29 11:11:16.093570 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:11:16.096158 systemd-networkd[1357]: eth0: Configuring with /run/systemd/network/10-0e:c8:5f:a2:96:54.network. 
Jan 29 11:11:16.102688 systemd-networkd[1357]: eth0: Link UP Jan 29 11:11:16.103698 systemd-networkd[1357]: eth0: Gained carrier Jan 29 11:11:16.104129 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:11:16.117263 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 29 11:11:16.141638 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 29 11:11:16.142385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:16.142765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:16.148568 systemd-resolved[1330]: Positive Trust Anchors: Jan 29 11:11:16.148612 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:11:16.148669 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:11:16.149769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:11:16.157828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:11:16.170268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:11:16.171563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:16.171651 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:11:16.171678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:16.179119 systemd-resolved[1330]: Using system hostname 'ci-4186.1.0-4-1698ea429f'. Jan 29 11:11:16.191085 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:11:16.192026 systemd[1]: Reached target network.target - Network. Jan 29 11:11:16.193130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:11:16.201188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:11:16.202623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:11:16.206124 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:11:16.207636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:11:16.215047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:11:16.216828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
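eth0 above is configured from a generated unit at /run/systemd/network/10-0e:c8:5f:a2:96:54.network, written in the initrd by parse-ip-for-networkd (stopped during the teardown earlier in this log). The file's actual contents are not shown here; a plausible minimal shape for such a per-NIC unit, matching the interface by the MAC address in its filename, would be (illustrative assumption only, in valid systemd.network syntax):

    [Match]
    MACAddress=0e:c8:5f:a2:96:54

    [Network]
    DHCP=yes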
Jan 29 11:11:16.222461 kernel: ISO 9660 Extensions: RRIP_1991A Jan 29 11:11:16.226454 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1362) Jan 29 11:11:16.227643 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 29 11:11:16.229408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:11:16.229556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:11:16.321590 systemd-networkd[1357]: eth1: Configuring with /run/systemd/network/10-d2:80:fc:3f:53:9d.network. Jan 29 11:11:16.322226 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 29 11:11:16.323940 systemd-networkd[1357]: eth1: Link UP Jan 29 11:11:16.323955 systemd-networkd[1357]: eth1: Gained carrier Jan 29 11:11:16.328408 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 29 11:11:16.329388 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 29 11:11:16.347465 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 11:11:16.381458 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 29 11:11:16.409069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:11:16.416461 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:11:16.421735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:11:16.433463 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:11:16.458743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:11:16.479467 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:11:16.494472 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 11:11:16.500385 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 11:11:16.506470 kernel: Console: switching to colour dummy device 80x25 Jan 29 11:11:16.508949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:16.510001 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 11:11:16.510067 kernel: [drm] features: -context_init Jan 29 11:11:16.522469 kernel: [drm] number of scanouts: 1 Jan 29 11:11:16.522584 kernel: [drm] number of cap sets: 0 Jan 29 11:11:16.546638 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 11:11:16.553834 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 11:11:16.553950 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 11:11:16.556397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:11:16.558952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:16.569716 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 11:11:16.584954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:16.648062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:11:16.648442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:11:16.666888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:16.690466 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:11:16.748409 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:11:16.755900 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:11:16.785469 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:11:16.806058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:16.816152 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:11:16.817917 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:11:16.818060 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:11:16.818330 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:11:16.818495 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:11:16.818884 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:11:16.819149 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:11:16.819260 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:11:16.819346 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:11:16.819393 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:11:16.820140 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:11:16.822973 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:11:16.825992 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:11:16.837827 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:11:16.846820 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:11:16.850100 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:11:16.852008 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:11:16.853249 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:11:16.854969 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:11:16.855034 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:11:16.858633 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:11:16.865195 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:11:16.878315 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:11:16.889800 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:11:16.897900 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:11:16.911782 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:11:16.912561 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 29 11:11:16.920783 jq[1442]: false Jan 29 11:11:16.924722 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:11:16.938617 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:11:16.949848 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:11:16.967114 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:11:16.976662 dbus-daemon[1441]: [system] SELinux support is enabled Jan 29 11:11:16.983607 coreos-metadata[1440]: Jan 29 11:11:16.980 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:11:16.984745 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:11:16.988013 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:11:16.990327 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:11:16.998832 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:11:17.004974 coreos-metadata[1440]: Jan 29 11:11:17.004 INFO Fetch successful Jan 29 11:11:17.013673 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:11:17.015968 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:11:17.027564 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:11:17.035106 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:11:17.035404 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:11:17.043371 update_engine[1452]: I20250129 11:11:17.042761 1452 main.cc:92] Flatcar Update Engine starting Jan 29 11:11:17.048078 update_engine[1452]: I20250129 11:11:17.047788 1452 update_check_scheduler.cc:74] Next update check in 6m45s Jan 29 11:11:17.052344 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:11:17.053481 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:11:17.056772 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:11:17.056958 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 29 11:11:17.056989 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:11:17.062479 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:11:17.079743 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:11:17.098915 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:11:17.099181 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 11:11:17.119602 extend-filesystems[1443]: Found loop4 Jan 29 11:11:17.119602 extend-filesystems[1443]: Found loop5 Jan 29 11:11:17.119602 extend-filesystems[1443]: Found loop6 Jan 29 11:11:17.129358 jq[1454]: true Jan 29 11:11:17.134163 extend-filesystems[1443]: Found loop7 Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda1 Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda2 Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda3 Jan 29 11:11:17.134163 extend-filesystems[1443]: Found usr Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda4 Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda6 Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda7 Jan 29 11:11:17.134163 extend-filesystems[1443]: Found vda9 Jan 29 11:11:17.134163 extend-filesystems[1443]: Checking size of /dev/vda9 Jan 29 11:11:17.168526 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:11:17.176871 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:11:17.196910 tar[1458]: linux-amd64/helm Jan 29 11:11:17.179180 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:11:17.203265 jq[1471]: true Jan 29 11:11:17.232596 extend-filesystems[1443]: Resized partition /dev/vda9 Jan 29 11:11:17.237804 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:11:17.255850 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 29 11:11:17.258303 systemd-logind[1450]: New seat seat0. Jan 29 11:11:17.283677 systemd-logind[1450]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 11:11:17.283720 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:11:17.284043 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:11:17.305571 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:11:17.313140 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:11:17.378503 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 11:11:17.425003 extend-filesystems[1487]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:11:17.425003 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 11:11:17.425003 extend-filesystems[1487]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 11:11:17.448005 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jan 29 11:11:17.448005 extend-filesystems[1443]: Found vdb Jan 29 11:11:17.434719 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:11:17.435013 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:11:17.453534 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1363) Jan 29 11:11:17.589890 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:11:17.610034 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:11:17.613768 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:11:17.630926 systemd[1]: Starting sshkeys.service... Jan 29 11:11:17.692023 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
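[Editor's note] The extend-filesystems pass above is the standard first-boot step that grows the shipped root image to fill the provisioned disk: resize2fs resizes /dev/vda9 online from 553472 to 15121403 4k blocks. A quick sketch of the arithmetic, with the block counts taken from the log (the helper function is illustrative only):

```go
package main

import "fmt"

func main() {
	const blockSize = 4096 // 4k blocks, per the resize2fs output above
	oldBlocks, newBlocks := int64(553472), int64(15121403)
	gib := func(blocks int64) float64 {
		return float64(blocks*blockSize) / (1 << 30)
	}
	// ~2.1 GiB stock image root grown online to ~57.7 GiB of droplet disk.
	fmt.Printf("before: %.1f GiB, after: %.1f GiB\n", gib(oldBlocks), gib(newBlocks))
}
```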
Jan 29 11:11:17.705378 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:11:17.809310 coreos-metadata[1517]: Jan 29 11:11:17.808 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:11:17.824560 coreos-metadata[1517]: Jan 29 11:11:17.824 INFO Fetch successful Jan 29 11:11:17.836843 systemd-networkd[1357]: eth1: Gained IPv6LL Jan 29 11:11:17.837604 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 29 11:11:17.848236 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:11:17.854796 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:11:17.857377 unknown[1517]: wrote ssh authorized keys file for user: core Jan 29 11:11:17.874114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:11:17.884011 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:11:17.966032 systemd-networkd[1357]: eth0: Gained IPv6LL Jan 29 11:11:17.968383 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 29 11:11:17.976459 containerd[1475]: time="2025-01-29T11:11:17.974974605Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:11:17.991964 update-ssh-keys[1523]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:11:17.993114 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:11:18.000537 systemd[1]: Finished sshkeys.service. Jan 29 11:11:18.029787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:11:18.099766 containerd[1475]: time="2025-01-29T11:11:18.099509445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:11:18.114663 containerd[1475]: time="2025-01-29T11:11:18.114582427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:11:18.114663 containerd[1475]: time="2025-01-29T11:11:18.114653464Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:11:18.114871 containerd[1475]: time="2025-01-29T11:11:18.114684635Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:11:18.114979 containerd[1475]: time="2025-01-29T11:11:18.114951194Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:11:18.115029 containerd[1475]: time="2025-01-29T11:11:18.114991153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:11:18.115121 containerd[1475]: time="2025-01-29T11:11:18.115094130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:11:18.115172 containerd[1475]: time="2025-01-29T11:11:18.115143933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:11:18.117690 containerd[1475]: time="2025-01-29T11:11:18.117628123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:11:18.117690 containerd[1475]: time="2025-01-29T11:11:18.117680863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:11:18.117882 containerd[1475]: time="2025-01-29T11:11:18.117710815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:11:18.117882 containerd[1475]: time="2025-01-29T11:11:18.117724930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:11:18.117984 containerd[1475]: time="2025-01-29T11:11:18.117910088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:11:18.119448 containerd[1475]: time="2025-01-29T11:11:18.118249827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:11:18.119578 containerd[1475]: time="2025-01-29T11:11:18.119503715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:11:18.119578 containerd[1475]: time="2025-01-29T11:11:18.119533941Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:11:18.119773 containerd[1475]: time="2025-01-29T11:11:18.119742274Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:11:18.119861 containerd[1475]: time="2025-01-29T11:11:18.119839852Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:11:18.232567 containerd[1475]: time="2025-01-29T11:11:18.232488859Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:11:18.232819 containerd[1475]: time="2025-01-29T11:11:18.232616600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:11:18.232819 containerd[1475]: time="2025-01-29T11:11:18.232643306Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:11:18.232819 containerd[1475]: time="2025-01-29T11:11:18.232670498Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:11:18.232819 containerd[1475]: time="2025-01-29T11:11:18.232694054Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:11:18.233030 containerd[1475]: time="2025-01-29T11:11:18.233002073Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:11:18.234446 containerd[1475]: time="2025-01-29T11:11:18.233577226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 11:11:18.235489 containerd[1475]: time="2025-01-29T11:11:18.235415177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:11:18.235580 containerd[1475]: time="2025-01-29T11:11:18.235504193Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:11:18.235580 containerd[1475]: time="2025-01-29T11:11:18.235562668Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:11:18.235720 containerd[1475]: time="2025-01-29T11:11:18.235584224Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237585 containerd[1475]: time="2025-01-29T11:11:18.237516858Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237668 containerd[1475]: time="2025-01-29T11:11:18.237595835Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237668 containerd[1475]: time="2025-01-29T11:11:18.237642387Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237713 containerd[1475]: time="2025-01-29T11:11:18.237676451Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237736 containerd[1475]: time="2025-01-29T11:11:18.237696898Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237896 containerd[1475]: time="2025-01-29T11:11:18.237733914Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237896 containerd[1475]: time="2025-01-29T11:11:18.237752316Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:11:18.237896 containerd[1475]: time="2025-01-29T11:11:18.237834819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.237961 containerd[1475]: time="2025-01-29T11:11:18.237890952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.237991 containerd[1475]: time="2025-01-29T11:11:18.237972707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238019 containerd[1475]: time="2025-01-29T11:11:18.238005647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238104571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238187452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238231301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238257243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238294630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238323555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238345300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238384011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238484699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238516805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238596862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238643072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238665310Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:11:18.238797 containerd[1475]: time="2025-01-29T11:11:18.238777997Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:11:18.239618 containerd[1475]: time="2025-01-29T11:11:18.238832119Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:11:18.239618 containerd[1475]: time="2025-01-29T11:11:18.238852645Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:11:18.239618 containerd[1475]: time="2025-01-29T11:11:18.238886304Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:11:18.239618 containerd[1475]: time="2025-01-29T11:11:18.238901364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:11:18.239618 containerd[1475]: time="2025-01-29T11:11:18.238921033Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:11:18.242442 containerd[1475]: time="2025-01-29T11:11:18.238936017Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:11:18.242442 containerd[1475]: time="2025-01-29T11:11:18.240982044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:11:18.244898 containerd[1475]: time="2025-01-29T11:11:18.244753833Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:11:18.245183 containerd[1475]: time="2025-01-29T11:11:18.244914407Z" level=info msg="Connect containerd service" Jan 29 11:11:18.245183 containerd[1475]: time="2025-01-29T11:11:18.245021062Z" level=info msg="using legacy CRI server" Jan 29 11:11:18.245183 containerd[1475]: time="2025-01-29T11:11:18.245056269Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:11:18.247444 containerd[1475]: time="2025-01-29T11:11:18.245366727Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:11:18.248421 containerd[1475]: time="2025-01-29T11:11:18.248366500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:11:18.249008 
containerd[1475]: time="2025-01-29T11:11:18.248621535Z" level=info msg="Start subscribing containerd event" Jan 29 11:11:18.249008 containerd[1475]: time="2025-01-29T11:11:18.248704451Z" level=info msg="Start recovering state" Jan 29 11:11:18.249008 containerd[1475]: time="2025-01-29T11:11:18.248797121Z" level=info msg="Start event monitor" Jan 29 11:11:18.249008 containerd[1475]: time="2025-01-29T11:11:18.248817967Z" level=info msg="Start snapshots syncer" Jan 29 11:11:18.249008 containerd[1475]: time="2025-01-29T11:11:18.248830676Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:11:18.249008 containerd[1475]: time="2025-01-29T11:11:18.248838474Z" level=info msg="Start streaming server" Jan 29 11:11:18.249244 containerd[1475]: time="2025-01-29T11:11:18.249204242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:11:18.249332 containerd[1475]: time="2025-01-29T11:11:18.249299833Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:11:18.249515 containerd[1475]: time="2025-01-29T11:11:18.249387749Z" level=info msg="containerd successfully booted in 0.278144s" Jan 29 11:11:18.250008 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:11:18.419270 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:11:18.490020 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:11:18.505058 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:11:18.538323 tar[1458]: linux-amd64/LICENSE Jan 29 11:11:18.540549 tar[1458]: linux-amd64/README.md Jan 29 11:11:18.550902 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:11:18.551281 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:11:18.556183 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:11:18.570095 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:11:18.600689 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:11:18.613476 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:11:18.627143 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:11:18.629110 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:11:19.751887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:11:19.752121 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:11:19.757112 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:11:19.762732 systemd[1]: Startup finished in 1.514s (kernel) + 7.380s (initrd) + 7.691s (userspace) = 16.587s. Jan 29 11:11:19.799339 agetty[1557]: failed to open credentials directory Jan 29 11:11:19.803032 agetty[1556]: failed to open credentials directory Jan 29 11:11:20.896783 kubelet[1563]: E0129 11:11:20.896654 1563 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:11:20.900848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:11:20.901080 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
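[Editor's note] The kubelet exit above is the expected state on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm init/join, so until that happens every restart fails the same way (the restart counter climbs later in this log). A minimal Go sketch of the failing step, mirroring the error text in the log (path from the log; the error wrapping is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the log line above
	data, err := os.ReadFile(path)
	if err != nil {
		// Matches the failure mode in the log: the file simply does not
		// exist until kubeadm (or another provisioner) writes it.
		fmt.Printf("failed to load kubelet config file, path: %s, error: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d bytes of kubelet config\n", len(data))
}
```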
Jan 29 11:11:20.901972 systemd[1]: kubelet.service: Consumed 1.477s CPU time. Jan 29 11:11:26.646143 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:11:26.662235 systemd[1]: Started sshd@0-64.23.245.19:22-139.178.89.65:35488.service - OpenSSH per-connection server daemon (139.178.89.65:35488). Jan 29 11:11:26.774857 sshd[1576]: Accepted publickey for core from 139.178.89.65 port 35488 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:26.780054 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:26.798558 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:11:26.808194 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:11:26.816271 systemd-logind[1450]: New session 1 of user core. Jan 29 11:11:26.835620 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:11:26.845203 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:11:26.859179 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:11:27.073643 systemd[1580]: Queued start job for default target default.target. Jan 29 11:11:27.082946 systemd[1580]: Created slice app.slice - User Application Slice. Jan 29 11:11:27.083016 systemd[1580]: Reached target paths.target - Paths. Jan 29 11:11:27.083045 systemd[1580]: Reached target timers.target - Timers. Jan 29 11:11:27.086274 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:11:27.118895 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:11:27.119126 systemd[1580]: Reached target sockets.target - Sockets. Jan 29 11:11:27.119151 systemd[1580]: Reached target basic.target - Basic System. Jan 29 11:11:27.119235 systemd[1580]: Reached target default.target - Main User Target. Jan 29 11:11:27.119287 systemd[1580]: Startup finished in 248ms. Jan 29 11:11:27.119761 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:11:27.131948 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:11:27.281508 kernel: hrtimer: interrupt took 5836683 ns Jan 29 11:11:27.309779 systemd[1]: Started sshd@1-64.23.245.19:22-139.178.89.65:35492.service - OpenSSH per-connection server daemon (139.178.89.65:35492). Jan 29 11:11:27.536260 sshd[1591]: Accepted publickey for core from 139.178.89.65 port 35492 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:27.544063 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:27.578336 systemd-logind[1450]: New session 2 of user core. Jan 29 11:11:27.586802 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:11:27.697423 sshd[1593]: Connection closed by 139.178.89.65 port 35492 Jan 29 11:11:27.696672 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:27.713081 systemd[1]: sshd@1-64.23.245.19:22-139.178.89.65:35492.service: Deactivated successfully. Jan 29 11:11:27.716193 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:11:27.717411 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:11:27.737114 systemd[1]: Started sshd@2-64.23.245.19:22-139.178.89.65:35506.service - OpenSSH per-connection server daemon (139.178.89.65:35506). Jan 29 11:11:27.739556 systemd-logind[1450]: Removed session 2. 
Jan 29 11:11:27.800077 sshd[1598]: Accepted publickey for core from 139.178.89.65 port 35506 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:27.802414 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:27.811510 systemd-logind[1450]: New session 3 of user core. Jan 29 11:11:27.821864 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:11:27.884576 sshd[1600]: Connection closed by 139.178.89.65 port 35506 Jan 29 11:11:27.885168 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:27.898802 systemd[1]: sshd@2-64.23.245.19:22-139.178.89.65:35506.service: Deactivated successfully. Jan 29 11:11:27.901804 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:11:27.904714 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:11:27.910056 systemd[1]: Started sshd@3-64.23.245.19:22-139.178.89.65:35512.service - OpenSSH per-connection server daemon (139.178.89.65:35512). Jan 29 11:11:27.912882 systemd-logind[1450]: Removed session 3. Jan 29 11:11:27.993951 sshd[1605]: Accepted publickey for core from 139.178.89.65 port 35512 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:27.996577 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:28.010876 systemd-logind[1450]: New session 4 of user core. Jan 29 11:11:28.016852 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:11:28.089539 sshd[1607]: Connection closed by 139.178.89.65 port 35512 Jan 29 11:11:28.088295 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:28.101060 systemd[1]: sshd@3-64.23.245.19:22-139.178.89.65:35512.service: Deactivated successfully. Jan 29 11:11:28.104343 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:11:28.105820 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:11:28.114095 systemd[1]: Started sshd@4-64.23.245.19:22-139.178.89.65:35528.service - OpenSSH per-connection server daemon (139.178.89.65:35528). Jan 29 11:11:28.116783 systemd-logind[1450]: Removed session 4. Jan 29 11:11:28.187382 sshd[1612]: Accepted publickey for core from 139.178.89.65 port 35528 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:28.190253 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:28.200024 systemd-logind[1450]: New session 5 of user core. Jan 29 11:11:28.206830 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:11:28.299165 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:11:28.300408 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:11:28.317000 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 29 11:11:28.321082 sshd[1614]: Connection closed by 139.178.89.65 port 35528 Jan 29 11:11:28.322651 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:28.335406 systemd[1]: sshd@4-64.23.245.19:22-139.178.89.65:35528.service: Deactivated successfully. Jan 29 11:11:28.338385 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:11:28.341323 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. 
Jan 29 11:11:28.348280 systemd[1]: Started sshd@5-64.23.245.19:22-139.178.89.65:35540.service - OpenSSH per-connection server daemon (139.178.89.65:35540). Jan 29 11:11:28.350761 systemd-logind[1450]: Removed session 5. Jan 29 11:11:28.427685 sshd[1620]: Accepted publickey for core from 139.178.89.65 port 35540 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:28.430145 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:28.438376 systemd-logind[1450]: New session 6 of user core. Jan 29 11:11:28.453810 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:11:28.519352 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:11:28.520030 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:11:28.525915 sudo[1624]: pam_unix(sudo:session): session closed for user root Jan 29 11:11:28.534750 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:11:28.535179 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:11:28.560129 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:11:28.618380 augenrules[1646]: No rules Jan 29 11:11:28.621451 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:11:28.621786 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:11:28.624032 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 29 11:11:28.628325 sshd[1622]: Connection closed by 139.178.89.65 port 35540 Jan 29 11:11:28.628909 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:28.640609 systemd[1]: sshd@5-64.23.245.19:22-139.178.89.65:35540.service: Deactivated successfully. Jan 29 11:11:28.643644 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:11:28.646483 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:11:28.653142 systemd[1]: Started sshd@6-64.23.245.19:22-139.178.89.65:35550.service - OpenSSH per-connection server daemon (139.178.89.65:35550). Jan 29 11:11:28.655643 systemd-logind[1450]: Removed session 6. Jan 29 11:11:28.724746 sshd[1654]: Accepted publickey for core from 139.178.89.65 port 35550 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:28.727173 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:28.737222 systemd-logind[1450]: New session 7 of user core. Jan 29 11:11:28.752815 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:11:28.820181 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:11:28.820736 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:11:29.897040 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:11:29.900085 (dockerd)[1674]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:11:30.596413 dockerd[1674]: time="2025-01-29T11:11:30.596280809Z" level=info msg="Starting up" Jan 29 11:11:30.808560 systemd[1]: var-lib-docker-metacopy\x2dcheck1710120843-merged.mount: Deactivated successfully. 
Jan 29 11:11:30.841943 dockerd[1674]: time="2025-01-29T11:11:30.841882843Z" level=info msg="Loading containers: start." Jan 29 11:11:31.143468 kernel: Initializing XFRM netlink socket Jan 29 11:11:31.151721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:11:31.159904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:11:31.199755 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 29 11:11:32.070991 systemd-timesyncd[1346]: Contacted time server 69.89.207.199:123 (2.flatcar.pool.ntp.org). Jan 29 11:11:32.071396 systemd-timesyncd[1346]: Initial clock synchronization to Wed 2025-01-29 11:11:32.067547 UTC. Jan 29 11:11:32.071935 systemd-resolved[1330]: Clock change detected. Flushing caches. Jan 29 11:11:32.111439 systemd-networkd[1357]: docker0: Link UP Jan 29 11:11:32.196836 dockerd[1674]: time="2025-01-29T11:11:32.195900814Z" level=info msg="Loading containers: done." Jan 29 11:11:32.271753 dockerd[1674]: time="2025-01-29T11:11:32.271084307Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:11:32.271753 dockerd[1674]: time="2025-01-29T11:11:32.271273646Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:11:32.271753 dockerd[1674]: time="2025-01-29T11:11:32.271477441Z" level=info msg="Daemon has completed initialization" Jan 29 11:11:32.278928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:11:32.291499 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:11:32.385744 dockerd[1674]: time="2025-01-29T11:11:32.385526425Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:11:32.393591 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:11:32.399751 kubelet[1840]: E0129 11:11:32.398830 1840 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:11:32.406100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:11:32.406365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:11:33.738763 containerd[1475]: time="2025-01-29T11:11:33.738267036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:11:34.776738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454694234.mount: Deactivated successfully. 
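[Editor's note] Note the apparent jump in the middle of this span: once systemd-timesyncd gets its first NTP response (from 2.flatcar.pool.ntp.org, per the log), the wall clock is stepped and systemd-resolved flushes its caches, so journal timestamps after this point are not directly comparable with earlier ones. Go's time package illustrates the general distinction: durations computed from monotonic readings survive such steps, while bare wall-clock values do not (a generic sketch, not tied to timesyncd itself):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries both wall-clock and monotonic readings
	time.Sleep(100 * time.Millisecond)
	// Elapsed time uses the monotonic reading, so an NTP step of the
	// wall clock in between would not distort it.
	fmt.Println("elapsed (monotonic):", time.Since(start))
	// Round(0) strips the monotonic reading; comparisons made on the
	// wall-clock value alone are what a clock step perturbs.
	fmt.Println("wall-clock value:", start.Round(0))
}
```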
Jan 29 11:11:37.351503 containerd[1475]: time="2025-01-29T11:11:37.351411853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:37.355923 containerd[1475]: time="2025-01-29T11:11:37.355820216Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 11:11:37.361147 containerd[1475]: time="2025-01-29T11:11:37.361034510Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:37.377793 containerd[1475]: time="2025-01-29T11:11:37.377328604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:37.382282 containerd[1475]: time="2025-01-29T11:11:37.382187445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.643814161s" Jan 29 11:11:37.382282 containerd[1475]: time="2025-01-29T11:11:37.382278080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 11:11:37.418796 containerd[1475]: time="2025-01-29T11:11:37.418742494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:11:39.735486 containerd[1475]: time="2025-01-29T11:11:39.735365179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:39.742925 containerd[1475]: time="2025-01-29T11:11:39.742816534Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 11:11:39.748760 containerd[1475]: time="2025-01-29T11:11:39.748632737Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:39.760747 containerd[1475]: time="2025-01-29T11:11:39.760130242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:39.763654 containerd[1475]: time="2025-01-29T11:11:39.763527854Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.344731516s" Jan 29 11:11:39.763982 containerd[1475]: time="2025-01-29T11:11:39.763946915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 11:11:39.808912 
containerd[1475]: time="2025-01-29T11:11:39.808653747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:11:39.815175 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 29 11:11:41.475079 containerd[1475]: time="2025-01-29T11:11:41.474976975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:41.483335 containerd[1475]: time="2025-01-29T11:11:41.482803477Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 11:11:41.493805 containerd[1475]: time="2025-01-29T11:11:41.493494547Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:41.516698 containerd[1475]: time="2025-01-29T11:11:41.516553490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:41.519180 containerd[1475]: time="2025-01-29T11:11:41.518611384Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.709891371s" Jan 29 11:11:41.519180 containerd[1475]: time="2025-01-29T11:11:41.518817119Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 11:11:41.563775 containerd[1475]: time="2025-01-29T11:11:41.563705669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:11:42.657230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:11:42.669378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:11:42.873549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:11:42.886434 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:11:42.890857 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 29 11:11:42.991157 kubelet[1982]: E0129 11:11:42.990978 1982 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:11:42.994858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:11:42.995083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:11:43.387884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984935808.mount: Deactivated successfully. 
Jan 29 11:11:44.131807 containerd[1475]: time="2025-01-29T11:11:44.131574703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:44.141285 containerd[1475]: time="2025-01-29T11:11:44.141181240Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:11:44.152210 containerd[1475]: time="2025-01-29T11:11:44.152063385Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:44.166911 containerd[1475]: time="2025-01-29T11:11:44.166771485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:44.168540 containerd[1475]: time="2025-01-29T11:11:44.168299043Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.604516998s" Jan 29 11:11:44.168540 containerd[1475]: time="2025-01-29T11:11:44.168371814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:11:44.208064 containerd[1475]: time="2025-01-29T11:11:44.208002663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:11:44.983434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768025645.mount: Deactivated successfully. 
Jan 29 11:11:46.522839 containerd[1475]: time="2025-01-29T11:11:46.522759112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:46.528278 containerd[1475]: time="2025-01-29T11:11:46.528169133Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:11:46.534938 containerd[1475]: time="2025-01-29T11:11:46.534860560Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:46.546846 containerd[1475]: time="2025-01-29T11:11:46.546779802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:46.550248 containerd[1475]: time="2025-01-29T11:11:46.550166661Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.342101127s" Jan 29 11:11:46.550248 containerd[1475]: time="2025-01-29T11:11:46.550239686Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:11:46.595929 containerd[1475]: time="2025-01-29T11:11:46.595870234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:11:46.601081 systemd-resolved[1330]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 29 11:11:47.167853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157699238.mount: Deactivated successfully. 
Jan 29 11:11:47.191489 containerd[1475]: time="2025-01-29T11:11:47.191273808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:47.197332 containerd[1475]: time="2025-01-29T11:11:47.197208626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 11:11:47.202484 containerd[1475]: time="2025-01-29T11:11:47.202370417Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:47.209758 containerd[1475]: time="2025-01-29T11:11:47.209661577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:47.212118 containerd[1475]: time="2025-01-29T11:11:47.211919397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 615.990145ms" Jan 29 11:11:47.212118 containerd[1475]: time="2025-01-29T11:11:47.211981653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:11:47.254910 containerd[1475]: time="2025-01-29T11:11:47.254848825Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:11:48.149794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344549406.mount: Deactivated successfully. Jan 29 11:11:51.183026 containerd[1475]: time="2025-01-29T11:11:51.182884519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:51.185704 containerd[1475]: time="2025-01-29T11:11:51.185616789Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:11:51.191232 containerd[1475]: time="2025-01-29T11:11:51.191104219Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:51.201385 containerd[1475]: time="2025-01-29T11:11:51.201266830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:51.203893 containerd[1475]: time="2025-01-29T11:11:51.203813974Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.948909941s" Jan 29 11:11:51.204312 containerd[1475]: time="2025-01-29T11:11:51.204106363Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:11:53.246846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
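[Editor's note] The PullImage/Pulled pairs in this stretch are containerd's CRI plugin resolving each registry.k8s.io reference, fetching its layers, and unpacking them into the overlayfs snapshotter. The same pull can be reproduced against the socket this host serves (/run/containerd/containerd.sock, reported earlier in the log) with the containerd Go client; a sketch assuming the standard client API and the k8s.io namespace used for Kubernetes-managed images:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path as reported by containerd earlier in the log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name())
}
```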
Jan 29 11:11:53.257792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:11:53.525149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:11:53.539038 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:11:53.630370 kubelet[2168]: E0129 11:11:53.630291 2168 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:11:53.633828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:11:53.634282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:11:54.869342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:11:54.880193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:11:54.932066 systemd[1]: Reloading requested from client PID 2182 ('systemctl') (unit session-7.scope)... Jan 29 11:11:54.932106 systemd[1]: Reloading... Jan 29 11:11:55.115756 zram_generator::config[2221]: No configuration found. Jan 29 11:11:55.339933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:11:55.516568 systemd[1]: Reloading finished in 583 ms. Jan 29 11:11:55.597028 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:11:55.597187 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:11:55.597613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:11:55.604375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:11:55.784152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:11:55.800464 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:11:55.883045 kubelet[2276]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:11:55.883045 kubelet[2276]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:11:55.883045 kubelet[2276]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:11:55.885786 kubelet[2276]: I0129 11:11:55.885691 2276 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:11:56.736449 kubelet[2276]: I0129 11:11:56.735840 2276 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:11:56.736449 kubelet[2276]: I0129 11:11:56.735889 2276 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:11:56.736449 kubelet[2276]: I0129 11:11:56.736297 2276 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:11:56.802804 kubelet[2276]: I0129 11:11:56.802756 2276 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:11:56.807379 kubelet[2276]: E0129 11:11:56.807194 2276 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.245.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.823446 kubelet[2276]: I0129 11:11:56.823263 2276 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:11:56.824582 kubelet[2276]: I0129 11:11:56.823812 2276 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:11:56.824582 kubelet[2276]: I0129 11:11:56.823868 2276 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-4-1698ea429f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:11:56.824582 kubelet[2276]: I0129 11:11:56.824140 2276 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:11:56.824582 kubelet[2276]: I0129 11:11:56.824153 2276 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:11:56.825025 kubelet[2276]: I0129 11:11:56.824375 2276 state_mem.go:36] "Initialized new in-memory 
state store" Jan 29 11:11:56.828163 kubelet[2276]: I0129 11:11:56.828025 2276 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:11:56.828163 kubelet[2276]: I0129 11:11:56.828073 2276 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:11:56.828163 kubelet[2276]: I0129 11:11:56.828133 2276 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:11:56.828818 kubelet[2276]: I0129 11:11:56.828524 2276 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:11:56.830696 kubelet[2276]: W0129 11:11:56.830608 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.245.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-4-1698ea429f&limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.830696 kubelet[2276]: E0129 11:11:56.830700 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.245.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-4-1698ea429f&limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.835548 kubelet[2276]: W0129 11:11:56.835385 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.245.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.835548 kubelet[2276]: E0129 11:11:56.835475 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.245.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.836426 kubelet[2276]: I0129 11:11:56.836058 2276 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:11:56.839838 kubelet[2276]: I0129 11:11:56.839788 2276 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:11:56.841384 kubelet[2276]: W0129 11:11:56.840075 2276 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
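[Annotation] The repeated "connection refused" dials to 64.23.245.19:6443 are expected at this point: the API server the reflectors are trying to list from is itself one of the static pods the kubelet is about to start out of /etc/kubernetes/manifests, the path registered just above. A static pod manifest is an ordinary Pod spec on disk; a trimmed, illustrative skeleton (not the manifest kubeadm actually wrote on this node, and the image tag is assumed to match the kubelet version in the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true                  # binds :6443 directly on the node
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.1
        command:
        - kube-apiserver
        - --advertise-address=64.23.245.19   # the address the kubelet keeps dialing
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate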
Jan 29 11:11:56.841384 kubelet[2276]: I0129 11:11:56.841233 2276 server.go:1264] "Started kubelet" Jan 29 11:11:56.853884 kubelet[2276]: I0129 11:11:56.853836 2276 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:11:56.862883 kubelet[2276]: I0129 11:11:56.862842 2276 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:11:56.863978 kubelet[2276]: I0129 11:11:56.863937 2276 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:11:56.864275 kubelet[2276]: I0129 11:11:56.864258 2276 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:11:56.864853 kubelet[2276]: I0129 11:11:56.864789 2276 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:11:56.865286 kubelet[2276]: I0129 11:11:56.865267 2276 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:11:56.873861 kubelet[2276]: I0129 11:11:56.873309 2276 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:11:56.874454 kubelet[2276]: E0129 11:11:56.874227 2276 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.245.19:6443/api/v1/namespaces/default/events\": dial tcp 64.23.245.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-4-1698ea429f.181f2569a3fa4efc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-4-1698ea429f,UID:ci-4186.1.0-4-1698ea429f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-4-1698ea429f,},FirstTimestamp:2025-01-29 11:11:56.84118502 +0000 UTC m=+1.034566279,LastTimestamp:2025-01-29 11:11:56.84118502 +0000 UTC m=+1.034566279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-4-1698ea429f,}" Jan 29 11:11:56.875080 kubelet[2276]: W0129 11:11:56.875020 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.245.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.875230 kubelet[2276]: E0129 11:11:56.875217 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.245.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.875983 kubelet[2276]: I0129 11:11:56.875958 2276 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:11:56.878060 kubelet[2276]: E0129 11:11:56.877990 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.245.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-4-1698ea429f?timeout=10s\": dial tcp 64.23.245.19:6443: connect: connection refused" interval="200ms" Jan 29 11:11:56.879964 kubelet[2276]: I0129 11:11:56.879892 2276 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:11:56.880088 kubelet[2276]: I0129 11:11:56.880033 2276 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 
11:11:56.883829 kubelet[2276]: E0129 11:11:56.883774 2276 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:11:56.884385 kubelet[2276]: I0129 11:11:56.884077 2276 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:11:56.915857 kubelet[2276]: I0129 11:11:56.915202 2276 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:11:56.922525 kubelet[2276]: I0129 11:11:56.921963 2276 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:11:56.922525 kubelet[2276]: I0129 11:11:56.922021 2276 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:11:56.922525 kubelet[2276]: I0129 11:11:56.922075 2276 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:11:56.923037 kubelet[2276]: I0129 11:11:56.922997 2276 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:11:56.923037 kubelet[2276]: I0129 11:11:56.923039 2276 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:11:56.923198 kubelet[2276]: I0129 11:11:56.923066 2276 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:11:56.923198 kubelet[2276]: E0129 11:11:56.923159 2276 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:11:56.924647 kubelet[2276]: W0129 11:11:56.924591 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.245.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.925318 kubelet[2276]: E0129 11:11:56.924820 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.245.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:56.946899 kubelet[2276]: I0129 11:11:56.946835 2276 policy_none.go:49] "None policy: Start" Jan 29 11:11:56.949060 kubelet[2276]: I0129 11:11:56.949017 2276 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:11:56.949296 kubelet[2276]: I0129 11:11:56.949083 2276 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:11:56.961162 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:11:56.965134 kubelet[2276]: I0129 11:11:56.964620 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:56.965294 kubelet[2276]: E0129 11:11:56.965226 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.245.19:6443/api/v1/nodes\": dial tcp 64.23.245.19:6443: connect: connection refused" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:56.976155 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:11:56.986189 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
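[Annotation] The three slices systemd just created mirror the kubelet's pod QoS classes under the systemd cgroup driver: Guaranteed pods get per-pod slices directly under kubepods.slice, while Burstable and BestEffort pods land under the two child slices. The class is derived purely from the pod's resources stanza; a hypothetical Burstable pod, for illustration only:

    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo                       # hypothetical pod name
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.8   # image taken from the pulls later in this log
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          # requests set but no matching limits -> Burstable -> kubepods-burstable.slice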
Jan 29 11:11:57.009587 kubelet[2276]: I0129 11:11:57.009260 2276 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:11:57.009587 kubelet[2276]: I0129 11:11:57.009525 2276 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:11:57.010207 kubelet[2276]: I0129 11:11:57.009669 2276 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:11:57.011324 kubelet[2276]: E0129 11:11:57.011278 2276 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-4-1698ea429f\" not found" Jan 29 11:11:57.028983 kubelet[2276]: I0129 11:11:57.026870 2276 topology_manager.go:215] "Topology Admit Handler" podUID="1a120eaa871ca7603adbf7de21825114" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.028983 kubelet[2276]: I0129 11:11:57.028060 2276 topology_manager.go:215] "Topology Admit Handler" podUID="baf86c3e5f9d584a3413c832f5a56561" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.030023 kubelet[2276]: I0129 11:11:57.029968 2276 topology_manager.go:215] "Topology Admit Handler" podUID="e5da7fc9b92e41a7bffed02630302437" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.047698 systemd[1]: Created slice kubepods-burstable-pod1a120eaa871ca7603adbf7de21825114.slice - libcontainer container kubepods-burstable-pod1a120eaa871ca7603adbf7de21825114.slice. Jan 29 11:11:57.066010 systemd[1]: Created slice kubepods-burstable-pode5da7fc9b92e41a7bffed02630302437.slice - libcontainer container kubepods-burstable-pode5da7fc9b92e41a7bffed02630302437.slice. Jan 29 11:11:57.077833 kubelet[2276]: E0129 11:11:57.077644 2276 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.245.19:6443/api/v1/namespaces/default/events\": dial tcp 64.23.245.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-4-1698ea429f.181f2569a3fa4efc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-4-1698ea429f,UID:ci-4186.1.0-4-1698ea429f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-4-1698ea429f,},FirstTimestamp:2025-01-29 11:11:56.84118502 +0000 UTC m=+1.034566279,LastTimestamp:2025-01-29 11:11:56.84118502 +0000 UTC m=+1.034566279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-4-1698ea429f,}" Jan 29 11:11:57.079504 kubelet[2276]: E0129 11:11:57.079427 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.245.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-4-1698ea429f?timeout=10s\": dial tcp 64.23.245.19:6443: connect: connection refused" interval="400ms" Jan 29 11:11:57.088130 systemd[1]: Created slice kubepods-burstable-podbaf86c3e5f9d584a3413c832f5a56561.slice - libcontainer container kubepods-burstable-podbaf86c3e5f9d584a3413c832f5a56561.slice. 
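[Annotation] The lease the kubelet keeps failing to ensure is its node heartbeat: a coordination.k8s.io Lease named after the node in the kube-node-lease namespace, renewed every few seconds once the API server is reachable. Note the ensure-lease retry interval doubling across this log (200ms, 400ms, then 800ms and 1.6s below) while the endpoint stays down. Roughly what the object looks like once created; duration is the kubelet default, the timestamp is illustrative:

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: ci-4186.1.0-4-1698ea429f
      namespace: kube-node-lease
    spec:
      holderIdentity: ci-4186.1.0-4-1698ea429f
      leaseDurationSeconds: 40                      # kubelet default
      renewTime: "2025-01-29T11:12:07.000000Z"      # illustrative value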
Jan 29 11:11:57.166481 kubelet[2276]: I0129 11:11:57.166050 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166481 kubelet[2276]: I0129 11:11:57.166126 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166481 kubelet[2276]: I0129 11:11:57.166156 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a120eaa871ca7603adbf7de21825114-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-4-1698ea429f\" (UID: \"1a120eaa871ca7603adbf7de21825114\") " pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166481 kubelet[2276]: I0129 11:11:57.166185 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166481 kubelet[2276]: I0129 11:11:57.166213 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166955 kubelet[2276]: I0129 11:11:57.166239 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166955 kubelet[2276]: I0129 11:11:57.166267 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5da7fc9b92e41a7bffed02630302437-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-4-1698ea429f\" (UID: \"e5da7fc9b92e41a7bffed02630302437\") " pod="kube-system/kube-scheduler-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166955 kubelet[2276]: I0129 11:11:57.166292 2276 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a120eaa871ca7603adbf7de21825114-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-4-1698ea429f\" (UID: \"1a120eaa871ca7603adbf7de21825114\") " pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.166955 kubelet[2276]: I0129 11:11:57.166786 2276 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a120eaa871ca7603adbf7de21825114-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-4-1698ea429f\" (UID: \"1a120eaa871ca7603adbf7de21825114\") " pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.167285 kubelet[2276]: I0129 11:11:57.167133 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.167855 kubelet[2276]: E0129 11:11:57.167789 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.245.19:6443/api/v1/nodes\": dial tcp 64.23.245.19:6443: connect: connection refused" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.360559 kubelet[2276]: E0129 11:11:57.360391 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:57.361973 containerd[1475]: time="2025-01-29T11:11:57.361624923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-4-1698ea429f,Uid:1a120eaa871ca7603adbf7de21825114,Namespace:kube-system,Attempt:0,}" Jan 29 11:11:57.383823 kubelet[2276]: E0129 11:11:57.383439 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:57.388561 containerd[1475]: time="2025-01-29T11:11:57.388353063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-4-1698ea429f,Uid:e5da7fc9b92e41a7bffed02630302437,Namespace:kube-system,Attempt:0,}" Jan 29 11:11:57.394053 kubelet[2276]: E0129 11:11:57.393985 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:57.395380 containerd[1475]: time="2025-01-29T11:11:57.394634337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-4-1698ea429f,Uid:baf86c3e5f9d584a3413c832f5a56561,Namespace:kube-system,Attempt:0,}" Jan 29 11:11:57.480448 kubelet[2276]: E0129 11:11:57.480376 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.245.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-4-1698ea429f?timeout=10s\": dial tcp 64.23.245.19:6443: connect: connection refused" interval="800ms" Jan 29 11:11:57.570100 kubelet[2276]: I0129 11:11:57.569907 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.571494 kubelet[2276]: E0129 11:11:57.571415 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.245.19:6443/api/v1/nodes\": dial tcp 64.23.245.19:6443: connect: connection refused" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:57.704322 kubelet[2276]: W0129 11:11:57.704032 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.245.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:57.704322 kubelet[2276]: E0129 11:11:57.704105 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://64.23.245.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:57.919015 kubelet[2276]: W0129 11:11:57.918870 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.245.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:57.919015 kubelet[2276]: E0129 11:11:57.918930 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.245.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:58.037125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585960570.mount: Deactivated successfully. Jan 29 11:11:58.080383 containerd[1475]: time="2025-01-29T11:11:58.080289864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:11:58.092104 containerd[1475]: time="2025-01-29T11:11:58.091824304Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:11:58.094763 containerd[1475]: time="2025-01-29T11:11:58.094489638Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:11:58.102769 containerd[1475]: time="2025-01-29T11:11:58.102359431Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:11:58.108733 containerd[1475]: time="2025-01-29T11:11:58.107906258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:11:58.118777 containerd[1475]: time="2025-01-29T11:11:58.118645746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:11:58.120540 containerd[1475]: time="2025-01-29T11:11:58.120449963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 758.668281ms" Jan 29 11:11:58.126614 containerd[1475]: time="2025-01-29T11:11:58.126205221Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:11:58.126614 containerd[1475]: time="2025-01-29T11:11:58.126398660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:11:58.136945 containerd[1475]: time="2025-01-29T11:11:58.136869306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo 
tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 742.059746ms" Jan 29 11:11:58.177318 containerd[1475]: time="2025-01-29T11:11:58.177005673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 788.51834ms" Jan 29 11:11:58.283619 kubelet[2276]: E0129 11:11:58.281192 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.245.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-4-1698ea429f?timeout=10s\": dial tcp 64.23.245.19:6443: connect: connection refused" interval="1.6s" Jan 29 11:11:58.309847 kubelet[2276]: W0129 11:11:58.307771 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.245.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-4-1698ea429f&limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:58.309847 kubelet[2276]: E0129 11:11:58.307870 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.245.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-4-1698ea429f&limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:58.309847 kubelet[2276]: W0129 11:11:58.308364 2276 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.245.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:58.309847 kubelet[2276]: E0129 11:11:58.308463 2276 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.245.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:58.377648 containerd[1475]: time="2025-01-29T11:11:58.372734102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:11:58.377648 containerd[1475]: time="2025-01-29T11:11:58.376439361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:11:58.377648 containerd[1475]: time="2025-01-29T11:11:58.376461770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:58.377648 containerd[1475]: time="2025-01-29T11:11:58.376684020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:58.379675 kubelet[2276]: I0129 11:11:58.379165 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:58.379675 kubelet[2276]: E0129 11:11:58.379632 2276 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.245.19:6443/api/v1/nodes\": dial tcp 64.23.245.19:6443: connect: connection refused" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:11:58.391050 containerd[1475]: time="2025-01-29T11:11:58.390664223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:11:58.391050 containerd[1475]: time="2025-01-29T11:11:58.390757723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:11:58.391050 containerd[1475]: time="2025-01-29T11:11:58.390774782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:58.391050 containerd[1475]: time="2025-01-29T11:11:58.390869063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:58.393348 containerd[1475]: time="2025-01-29T11:11:58.393202016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:11:58.393348 containerd[1475]: time="2025-01-29T11:11:58.393278944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:11:58.393348 containerd[1475]: time="2025-01-29T11:11:58.393304344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:58.394097 containerd[1475]: time="2025-01-29T11:11:58.393423086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:58.423384 systemd[1]: Started cri-containerd-0afcfee83d1d376b89e0a7dc619d6c5f4d48b228838cf11df160555b77005c5a.scope - libcontainer container 0afcfee83d1d376b89e0a7dc619d6c5f4d48b228838cf11df160555b77005c5a. Jan 29 11:11:58.450017 systemd[1]: Started cri-containerd-648b5030d3949e2c7631e61981c56a9565575292a0df20d179c6857cccd2cce1.scope - libcontainer container 648b5030d3949e2c7631e61981c56a9565575292a0df20d179c6857cccd2cce1. Jan 29 11:11:58.460973 systemd[1]: Started cri-containerd-71b493471564e0725ad5c0b78d95ed4e423356fc533bc0d37eed1d1272e1fc2d.scope - libcontainer container 71b493471564e0725ad5c0b78d95ed4e423356fc533bc0d37eed1d1272e1fc2d. 
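[Annotation] Each of the three sandboxes comes up through the io.containerd.runc.v2 shim whose plugins are loading above, and each lands in its own cri-containerd-<id>.scope because containerd's CRI plugin is set to the systemd cgroup driver, matching the kubelet. The relevant stanza of a stock containerd 1.7 config is sketched below; this is the usual layout, not a file read from this host:

    # /etc/containerd/config.toml (excerpt, illustrative)
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"   # the pause image pulled earlier in this log

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true                        # produces the .scope units seen above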
Jan 29 11:11:58.545460 containerd[1475]: time="2025-01-29T11:11:58.545402725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-4-1698ea429f,Uid:baf86c3e5f9d584a3413c832f5a56561,Namespace:kube-system,Attempt:0,} returns sandbox id \"648b5030d3949e2c7631e61981c56a9565575292a0df20d179c6857cccd2cce1\"" Jan 29 11:11:58.549766 kubelet[2276]: E0129 11:11:58.549233 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:58.554217 containerd[1475]: time="2025-01-29T11:11:58.554095072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-4-1698ea429f,Uid:e5da7fc9b92e41a7bffed02630302437,Namespace:kube-system,Attempt:0,} returns sandbox id \"0afcfee83d1d376b89e0a7dc619d6c5f4d48b228838cf11df160555b77005c5a\"" Jan 29 11:11:58.562833 kubelet[2276]: E0129 11:11:58.561287 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:58.572695 containerd[1475]: time="2025-01-29T11:11:58.572648513Z" level=info msg="CreateContainer within sandbox \"648b5030d3949e2c7631e61981c56a9565575292a0df20d179c6857cccd2cce1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:11:58.574089 containerd[1475]: time="2025-01-29T11:11:58.573610909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-4-1698ea429f,Uid:1a120eaa871ca7603adbf7de21825114,Namespace:kube-system,Attempt:0,} returns sandbox id \"71b493471564e0725ad5c0b78d95ed4e423356fc533bc0d37eed1d1272e1fc2d\"" Jan 29 11:11:58.576845 containerd[1475]: time="2025-01-29T11:11:58.576785151Z" level=info msg="CreateContainer within sandbox \"0afcfee83d1d376b89e0a7dc619d6c5f4d48b228838cf11df160555b77005c5a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:11:58.577526 kubelet[2276]: E0129 11:11:58.577490 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:58.581989 containerd[1475]: time="2025-01-29T11:11:58.581943484Z" level=info msg="CreateContainer within sandbox \"71b493471564e0725ad5c0b78d95ed4e423356fc533bc0d37eed1d1272e1fc2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:11:58.667038 containerd[1475]: time="2025-01-29T11:11:58.666943183Z" level=info msg="CreateContainer within sandbox \"0afcfee83d1d376b89e0a7dc619d6c5f4d48b228838cf11df160555b77005c5a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6925b52f077122d13e66bea3061527379cabe94cc8991bdcb08a2c0ca94aa1c6\"" Jan 29 11:11:58.672744 containerd[1475]: time="2025-01-29T11:11:58.671094754Z" level=info msg="CreateContainer within sandbox \"648b5030d3949e2c7631e61981c56a9565575292a0df20d179c6857cccd2cce1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09006969c1915c1cc9b096fc442efaaef97a925b368b1ee755eaad394ffef957\"" Jan 29 11:11:58.672744 containerd[1475]: time="2025-01-29T11:11:58.671500746Z" level=info msg="StartContainer for \"6925b52f077122d13e66bea3061527379cabe94cc8991bdcb08a2c0ca94aa1c6\"" Jan 29 11:11:58.691335 containerd[1475]: time="2025-01-29T11:11:58.691280671Z" level=info msg="StartContainer for 
\"09006969c1915c1cc9b096fc442efaaef97a925b368b1ee755eaad394ffef957\"" Jan 29 11:11:58.714426 containerd[1475]: time="2025-01-29T11:11:58.714233596Z" level=info msg="CreateContainer within sandbox \"71b493471564e0725ad5c0b78d95ed4e423356fc533bc0d37eed1d1272e1fc2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58661316c7ae2c510ffd6729af33a5282a7009134def1a0c390e69a87cea4f51\"" Jan 29 11:11:58.717789 containerd[1475]: time="2025-01-29T11:11:58.717744835Z" level=info msg="StartContainer for \"58661316c7ae2c510ffd6729af33a5282a7009134def1a0c390e69a87cea4f51\"" Jan 29 11:11:58.729030 systemd[1]: Started cri-containerd-6925b52f077122d13e66bea3061527379cabe94cc8991bdcb08a2c0ca94aa1c6.scope - libcontainer container 6925b52f077122d13e66bea3061527379cabe94cc8991bdcb08a2c0ca94aa1c6. Jan 29 11:11:58.755080 systemd[1]: Started cri-containerd-09006969c1915c1cc9b096fc442efaaef97a925b368b1ee755eaad394ffef957.scope - libcontainer container 09006969c1915c1cc9b096fc442efaaef97a925b368b1ee755eaad394ffef957. Jan 29 11:11:58.797078 systemd[1]: Started cri-containerd-58661316c7ae2c510ffd6729af33a5282a7009134def1a0c390e69a87cea4f51.scope - libcontainer container 58661316c7ae2c510ffd6729af33a5282a7009134def1a0c390e69a87cea4f51. Jan 29 11:11:58.858909 kubelet[2276]: E0129 11:11:58.858605 2276 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.245.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.245.19:6443: connect: connection refused Jan 29 11:11:58.890016 containerd[1475]: time="2025-01-29T11:11:58.889585033Z" level=info msg="StartContainer for \"6925b52f077122d13e66bea3061527379cabe94cc8991bdcb08a2c0ca94aa1c6\" returns successfully" Jan 29 11:11:58.908101 containerd[1475]: time="2025-01-29T11:11:58.907944261Z" level=info msg="StartContainer for \"09006969c1915c1cc9b096fc442efaaef97a925b368b1ee755eaad394ffef957\" returns successfully" Jan 29 11:11:58.944594 containerd[1475]: time="2025-01-29T11:11:58.943338514Z" level=info msg="StartContainer for \"58661316c7ae2c510ffd6729af33a5282a7009134def1a0c390e69a87cea4f51\" returns successfully" Jan 29 11:11:58.951953 kubelet[2276]: E0129 11:11:58.951905 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:58.960139 kubelet[2276]: E0129 11:11:58.958021 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:58.964803 kubelet[2276]: E0129 11:11:58.964760 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:59.969907 kubelet[2276]: E0129 11:11:59.969045 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:11:59.983461 kubelet[2276]: I0129 11:11:59.983031 2276 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:12:00.973608 kubelet[2276]: E0129 11:12:00.973533 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:01.832428 kubelet[2276]: E0129 11:12:01.832206 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:02.047293 kubelet[2276]: E0129 11:12:02.047241 2276 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-4-1698ea429f\" not found" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:12:02.122918 kubelet[2276]: I0129 11:12:02.122811 2276 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:12:02.834297 kubelet[2276]: I0129 11:12:02.834222 2276 apiserver.go:52] "Watching apiserver" Jan 29 11:12:02.865026 kubelet[2276]: I0129 11:12:02.864933 2276 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:12:02.977323 update_engine[1452]: I20250129 11:12:02.977223 1452 update_attempter.cc:509] Updating boot flags... Jan 29 11:12:03.051739 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2556) Jan 29 11:12:03.140013 kubelet[2276]: W0129 11:12:03.139881 2276 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:12:03.142484 kubelet[2276]: E0129 11:12:03.142246 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:03.182087 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2556) Jan 29 11:12:03.992282 kubelet[2276]: E0129 11:12:03.992175 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:05.258838 systemd[1]: Reloading requested from client PID 2564 ('systemctl') (unit session-7.scope)... Jan 29 11:12:05.258859 systemd[1]: Reloading... Jan 29 11:12:05.406757 zram_generator::config[2606]: No configuration found. Jan 29 11:12:05.616879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:12:05.787989 systemd[1]: Reloading finished in 528 ms. Jan 29 11:12:05.847759 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:05.848651 kubelet[2276]: I0129 11:12:05.848388 2276 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:12:05.863490 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:12:05.864100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:05.864354 systemd[1]: kubelet.service: Consumed 1.596s CPU time, 112.5M memory peak, 0B memory swap peak. Jan 29 11:12:05.872381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:06.086989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
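[Annotation] The "referenced but unset environment variable" notices, at this restart and at the first start, come from the drop-in that assembles the kubelet command line; KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS are optional and may legitimately be empty. The upstream kubeadm drop-in is essentially the following (the path and exact contents can differ on Flatcar):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (upstream layout)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm writes runtime flags here at init/join; KUBELET_EXTRA_ARGS is left to the admin
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS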
Jan 29 11:12:06.099327 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:12:06.201804 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:06.201804 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:12:06.201804 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:06.203740 kubelet[2655]: I0129 11:12:06.202346 2655 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:12:06.210613 kubelet[2655]: I0129 11:12:06.210574 2655 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:12:06.210812 kubelet[2655]: I0129 11:12:06.210801 2655 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:12:06.211143 kubelet[2655]: I0129 11:12:06.211124 2655 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:12:06.213703 kubelet[2655]: I0129 11:12:06.213675 2655 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:12:06.215314 kubelet[2655]: I0129 11:12:06.215288 2655 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:12:06.225883 kubelet[2655]: I0129 11:12:06.225855 2655 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:12:06.226404 kubelet[2655]: I0129 11:12:06.226366 2655 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:12:06.226706 kubelet[2655]: I0129 11:12:06.226489 2655 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-4-1698ea429f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:12:06.226896 kubelet[2655]: I0129 11:12:06.226883 2655 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:12:06.226946 kubelet[2655]: I0129 11:12:06.226940 2655 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:12:06.227039 kubelet[2655]: I0129 11:12:06.227032 2655 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:06.227374 kubelet[2655]: I0129 11:12:06.227361 2655 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:12:06.227449 kubelet[2655]: I0129 11:12:06.227441 2655 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:12:06.227881 kubelet[2655]: I0129 11:12:06.227866 2655 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:12:06.228192 kubelet[2655]: I0129 11:12:06.227973 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:12:06.231441 kubelet[2655]: I0129 11:12:06.231394 2655 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:12:06.232259 kubelet[2655]: I0129 11:12:06.232119 2655 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:12:06.233600 kubelet[2655]: I0129 11:12:06.233579 2655 server.go:1264] "Started kubelet" Jan 29 11:12:06.240281 kubelet[2655]: I0129 11:12:06.240119 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:12:06.250887 kubelet[2655]: I0129 11:12:06.250307 2655 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:12:06.251931 kubelet[2655]: I0129 11:12:06.251812 2655 server.go:455] "Adding 
debug handlers to kubelet server" Jan 29 11:12:06.263032 kubelet[2655]: I0129 11:12:06.252815 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:12:06.263032 kubelet[2655]: I0129 11:12:06.262834 2655 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:12:06.268795 kubelet[2655]: I0129 11:12:06.268758 2655 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:12:06.275212 kubelet[2655]: I0129 11:12:06.275090 2655 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:12:06.277174 kubelet[2655]: I0129 11:12:06.275613 2655 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:12:06.303326 kubelet[2655]: I0129 11:12:06.300957 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:12:06.303326 kubelet[2655]: I0129 11:12:06.302807 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:12:06.303326 kubelet[2655]: I0129 11:12:06.302855 2655 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:12:06.303326 kubelet[2655]: I0129 11:12:06.302891 2655 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:12:06.303326 kubelet[2655]: E0129 11:12:06.302962 2655 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:12:06.303326 kubelet[2655]: I0129 11:12:06.303032 2655 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:12:06.303326 kubelet[2655]: I0129 11:12:06.303150 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:12:06.309290 sudo[2672]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:12:06.310247 sudo[2672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:12:06.320741 kubelet[2655]: I0129 11:12:06.320425 2655 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:12:06.322740 kubelet[2655]: E0129 11:12:06.321814 2655 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:12:06.381638 kubelet[2655]: I0129 11:12:06.381597 2655 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.404283 kubelet[2655]: E0129 11:12:06.403927 2655 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:12:06.406550 kubelet[2655]: I0129 11:12:06.406510 2655 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:12:06.406965 kubelet[2655]: I0129 11:12:06.406940 2655 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:12:06.407132 kubelet[2655]: I0129 11:12:06.407119 2655 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:06.407388 kubelet[2655]: I0129 11:12:06.406558 2655 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.407591 kubelet[2655]: I0129 11:12:06.407574 2655 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.408429 kubelet[2655]: I0129 11:12:06.408311 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:12:06.408731 kubelet[2655]: I0129 11:12:06.408579 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:12:06.408731 kubelet[2655]: I0129 11:12:06.408635 2655 policy_none.go:49] "None policy: Start" Jan 29 11:12:06.414238 kubelet[2655]: I0129 11:12:06.413850 2655 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:12:06.414238 kubelet[2655]: I0129 11:12:06.413893 2655 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:12:06.414238 kubelet[2655]: I0129 11:12:06.414118 2655 state_mem.go:75] "Updated machine memory state" Jan 29 11:12:06.456091 kubelet[2655]: I0129 11:12:06.455081 2655 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:12:06.458754 kubelet[2655]: I0129 11:12:06.456599 2655 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:12:06.460969 kubelet[2655]: I0129 11:12:06.460400 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:12:06.610760 kubelet[2655]: I0129 11:12:06.605580 2655 topology_manager.go:215] "Topology Admit Handler" podUID="1a120eaa871ca7603adbf7de21825114" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.610760 kubelet[2655]: I0129 11:12:06.605819 2655 topology_manager.go:215] "Topology Admit Handler" podUID="baf86c3e5f9d584a3413c832f5a56561" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.610760 kubelet[2655]: I0129 11:12:06.605959 2655 topology_manager.go:215] "Topology Admit Handler" podUID="e5da7fc9b92e41a7bffed02630302437" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.620147 kubelet[2655]: W0129 11:12:06.620100 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:12:06.630117 kubelet[2655]: W0129 11:12:06.630036 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:12:06.630294 kubelet[2655]: E0129 11:12:06.630138 2655 kubelet.go:1928] "Failed creating 
a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-4-1698ea429f\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.630792 kubelet[2655]: W0129 11:12:06.630764 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:12:06.683861 kubelet[2655]: I0129 11:12:06.679870 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a120eaa871ca7603adbf7de21825114-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-4-1698ea429f\" (UID: \"1a120eaa871ca7603adbf7de21825114\") " pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.683861 kubelet[2655]: I0129 11:12:06.679928 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a120eaa871ca7603adbf7de21825114-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-4-1698ea429f\" (UID: \"1a120eaa871ca7603adbf7de21825114\") " pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.683861 kubelet[2655]: I0129 11:12:06.680822 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.683861 kubelet[2655]: I0129 11:12:06.681007 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.683861 kubelet[2655]: I0129 11:12:06.681050 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.684258 kubelet[2655]: I0129 11:12:06.681824 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a120eaa871ca7603adbf7de21825114-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-4-1698ea429f\" (UID: \"1a120eaa871ca7603adbf7de21825114\") " pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.684258 kubelet[2655]: I0129 11:12:06.681856 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.684258 kubelet[2655]: I0129 11:12:06.681883 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/baf86c3e5f9d584a3413c832f5a56561-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-4-1698ea429f\" (UID: \"baf86c3e5f9d584a3413c832f5a56561\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.684258 kubelet[2655]: I0129 11:12:06.682115 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5da7fc9b92e41a7bffed02630302437-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-4-1698ea429f\" (UID: \"e5da7fc9b92e41a7bffed02630302437\") " pod="kube-system/kube-scheduler-ci-4186.1.0-4-1698ea429f" Jan 29 11:12:06.922223 kubelet[2655]: E0129 11:12:06.922178 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:06.933557 kubelet[2655]: E0129 11:12:06.933505 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:06.934107 kubelet[2655]: E0129 11:12:06.933963 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:07.200820 sudo[2672]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:07.229803 kubelet[2655]: I0129 11:12:07.229743 2655 apiserver.go:52] "Watching apiserver" Jan 29 11:12:07.277635 kubelet[2655]: I0129 11:12:07.277529 2655 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:12:07.370559 kubelet[2655]: E0129 11:12:07.368496 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:07.370559 kubelet[2655]: E0129 11:12:07.369043 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:07.370559 kubelet[2655]: E0129 11:12:07.369742 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:07.433007 kubelet[2655]: I0129 11:12:07.432945 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-4-1698ea429f" podStartSLOduration=1.432912233 podStartE2EDuration="1.432912233s" podCreationTimestamp="2025-01-29 11:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:07.403240973 +0000 UTC m=+1.289217048" watchObservedRunningTime="2025-01-29 11:12:07.432912233 +0000 UTC m=+1.318888303" Jan 29 11:12:07.458741 kubelet[2655]: I0129 11:12:07.458538 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-4-1698ea429f" podStartSLOduration=4.45851194 podStartE2EDuration="4.45851194s" podCreationTimestamp="2025-01-29 11:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 
11:12:07.43379084 +0000 UTC m=+1.319766913" watchObservedRunningTime="2025-01-29 11:12:07.45851194 +0000 UTC m=+1.344488015" Jan 29 11:12:07.489077 kubelet[2655]: I0129 11:12:07.488991 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-4-1698ea429f" podStartSLOduration=1.488948704 podStartE2EDuration="1.488948704s" podCreationTimestamp="2025-01-29 11:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:07.460846132 +0000 UTC m=+1.346822225" watchObservedRunningTime="2025-01-29 11:12:07.488948704 +0000 UTC m=+1.374924780" Jan 29 11:12:08.374061 kubelet[2655]: E0129 11:12:08.373532 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:08.374061 kubelet[2655]: E0129 11:12:08.373954 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:09.036402 sudo[1657]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:09.040387 sshd[1656]: Connection closed by 139.178.89.65 port 35550 Jan 29 11:12:09.042976 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:09.048792 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:12:09.049887 systemd[1]: sshd@6-64.23.245.19:22-139.178.89.65:35550.service: Deactivated successfully. Jan 29 11:12:09.054662 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:12:09.055271 systemd[1]: session-7.scope: Consumed 6.741s CPU time, 187.3M memory peak, 0B memory swap peak. Jan 29 11:12:09.060253 systemd-logind[1450]: Removed session 7. 
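The recurring dns.go:153 errors are kubelet's resolv.conf guard: most libc resolvers honor only the first three nameserver entries, so when the host's /etc/resolv.conf lists more (the applied line here even repeats 67.207.67.2), kubelet logs this warning and truncates the list it hands to pods. A minimal sketch of the check, assuming a plain resolv.conf parser rather than kubelet's actual implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic glibc MAXNS limit that kubelet
// enforces when building a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// kubelet logs essentially this warning and keeps the first three.
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```

Deduplicating the droplet's /etc/resolv.conf would bring the count back under the limit and silence the warning.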
Jan 29 11:12:10.879734 kubelet[2655]: E0129 11:12:10.879675 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:11.378194 kubelet[2655]: E0129 11:12:11.378084 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:12.381075 kubelet[2655]: E0129 11:12:12.380980 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:12.481476 kubelet[2655]: E0129 11:12:12.479705 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:13.384139 kubelet[2655]: E0129 11:12:13.383335 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:17.874871 kubelet[2655]: E0129 11:12:17.874172 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:18.661931 kubelet[2655]: I0129 11:12:18.661804 2655 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:12:18.664786 containerd[1475]: time="2025-01-29T11:12:18.663320330Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:12:18.665348 kubelet[2655]: I0129 11:12:18.663679 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:12:19.149942 kubelet[2655]: I0129 11:12:19.148956 2655 topology_manager.go:215] "Topology Admit Handler" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" podNamespace="kube-system" podName="cilium-95rfj" Jan 29 11:12:19.149942 kubelet[2655]: I0129 11:12:19.149266 2655 topology_manager.go:215] "Topology Admit Handler" podUID="5e0fbcb1-e923-4be5-be56-486a572589ec" podNamespace="kube-system" podName="kube-proxy-46mr2" Jan 29 11:12:19.168130 systemd[1]: Created slice kubepods-besteffort-pod5e0fbcb1_e923_4be5_be56_486a572589ec.slice - libcontainer container kubepods-besteffort-pod5e0fbcb1_e923_4be5_be56_486a572589ec.slice. 
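The kuberuntime_manager and kubelet_network lines record kubelet handing the node's newly assigned pod CIDR (192.168.0.0/24) to containerd over the CRI UpdateRuntimeConfig RPC; containerd answers that no CNI config template is set and waits for Cilium to install one. A hedged sketch of that RPC using the published cri-api types (the socket path is an assumption for a containerd host like this one):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The same RPC kubelet issues once the node is assigned a pod CIDR.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```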
Jan 29 11:12:19.174984 kubelet[2655]: I0129 11:12:19.173638 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-hostproc\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.174984 kubelet[2655]: I0129 11:12:19.173682 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-etc-cni-netd\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.174984 kubelet[2655]: I0129 11:12:19.173732 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-xtables-lock\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.174984 kubelet[2655]: I0129 11:12:19.173759 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-net\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.174984 kubelet[2655]: I0129 11:12:19.173781 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-kernel\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.174984 kubelet[2655]: I0129 11:12:19.173815 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0fbcb1-e923-4be5-be56-486a572589ec-lib-modules\") pod \"kube-proxy-46mr2\" (UID: \"5e0fbcb1-e923-4be5-be56-486a572589ec\") " pod="kube-system/kube-proxy-46mr2" Jan 29 11:12:19.175454 kubelet[2655]: I0129 11:12:19.173839 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-run\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.175454 kubelet[2655]: I0129 11:12:19.173864 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be7e408d-2297-4e2e-9590-197a09d4d70c-clustermesh-secrets\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.175454 kubelet[2655]: I0129 11:12:19.173888 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e0fbcb1-e923-4be5-be56-486a572589ec-kube-proxy\") pod \"kube-proxy-46mr2\" (UID: \"5e0fbcb1-e923-4be5-be56-486a572589ec\") " pod="kube-system/kube-proxy-46mr2" Jan 29 11:12:19.175454 kubelet[2655]: I0129 11:12:19.173917 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5zk5h\" (UniqueName: \"kubernetes.io/projected/5e0fbcb1-e923-4be5-be56-486a572589ec-kube-api-access-5zk5h\") pod \"kube-proxy-46mr2\" (UID: \"5e0fbcb1-e923-4be5-be56-486a572589ec\") " pod="kube-system/kube-proxy-46mr2" Jan 29 11:12:19.175454 kubelet[2655]: I0129 11:12:19.173945 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-cgroup\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.175454 kubelet[2655]: I0129 11:12:19.173967 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-hubble-tls\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.177367 kubelet[2655]: I0129 11:12:19.173992 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-bpf-maps\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.177367 kubelet[2655]: I0129 11:12:19.174013 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-lib-modules\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.177367 kubelet[2655]: I0129 11:12:19.174053 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjp4\" (UniqueName: \"kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-kube-api-access-dfjp4\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.177367 kubelet[2655]: I0129 11:12:19.174077 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e0fbcb1-e923-4be5-be56-486a572589ec-xtables-lock\") pod \"kube-proxy-46mr2\" (UID: \"5e0fbcb1-e923-4be5-be56-486a572589ec\") " pod="kube-system/kube-proxy-46mr2" Jan 29 11:12:19.177367 kubelet[2655]: I0129 11:12:19.174100 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cni-path\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.177367 kubelet[2655]: I0129 11:12:19.174125 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-config-path\") pod \"cilium-95rfj\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " pod="kube-system/cilium-95rfj" Jan 29 11:12:19.189221 systemd[1]: Created slice kubepods-burstable-podbe7e408d_2297_4e2e_9590_197a09d4d70c.slice - libcontainer container kubepods-burstable-podbe7e408d_2297_4e2e_9590_197a09d4d70c.slice. 
Jan 29 11:12:19.300316 kubelet[2655]: E0129 11:12:19.300242 2655 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 11:12:19.301078 kubelet[2655]: E0129 11:12:19.300569 2655 projected.go:200] Error preparing data for projected volume kube-api-access-dfjp4 for pod kube-system/cilium-95rfj: configmap "kube-root-ca.crt" not found Jan 29 11:12:19.301078 kubelet[2655]: E0129 11:12:19.300697 2655 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-kube-api-access-dfjp4 podName:be7e408d-2297-4e2e-9590-197a09d4d70c nodeName:}" failed. No retries permitted until 2025-01-29 11:12:19.800655739 +0000 UTC m=+13.686631794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dfjp4" (UniqueName: "kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-kube-api-access-dfjp4") pod "cilium-95rfj" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c") : configmap "kube-root-ca.crt" not found Jan 29 11:12:19.315664 kubelet[2655]: E0129 11:12:19.315519 2655 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 11:12:19.315664 kubelet[2655]: E0129 11:12:19.315563 2655 projected.go:200] Error preparing data for projected volume kube-api-access-5zk5h for pod kube-system/kube-proxy-46mr2: configmap "kube-root-ca.crt" not found Jan 29 11:12:19.315664 kubelet[2655]: E0129 11:12:19.315635 2655 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e0fbcb1-e923-4be5-be56-486a572589ec-kube-api-access-5zk5h podName:5e0fbcb1-e923-4be5-be56-486a572589ec nodeName:}" failed. No retries permitted until 2025-01-29 11:12:19.815607928 +0000 UTC m=+13.701583995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5zk5h" (UniqueName: "kubernetes.io/projected/5e0fbcb1-e923-4be5-be56-486a572589ec-kube-api-access-5zk5h") pod "kube-proxy-46mr2" (UID: "5e0fbcb1-e923-4be5-be56-486a572589ec") : configmap "kube-root-ca.crt" not found Jan 29 11:12:19.673773 kubelet[2655]: I0129 11:12:19.673599 2655 topology_manager.go:215] "Topology Admit Handler" podUID="763e1748-bde8-4902-bb57-72597b8701ef" podNamespace="kube-system" podName="cilium-operator-599987898-g8g79" Jan 29 11:12:19.689432 systemd[1]: Created slice kubepods-besteffort-pod763e1748_bde8_4902_bb57_72597b8701ef.slice - libcontainer container kubepods-besteffort-pod763e1748_bde8_4902_bb57_72597b8701ef.slice. 
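Both kube-api-access-* projected volumes fail on the first pass because the kube-root-ca.crt ConfigMap has not yet been published into the namespace (kube-controller-manager creates it shortly after the control plane comes up), so nestedpendingoperations schedules a retry 500ms out instead of failing the pods. A sketch of that retry pattern, assuming a doubling backoff in the spirit of kubelet's durationBeforeRetry:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New(`configmap "kube-root-ca.crt" not found`)

// mountProjectedVolume stands in for the real MountVolume.SetUp call;
// here it succeeds once the ConfigMap "appears" on the third attempt.
func mountProjectedVolume(attempt int) error {
	if attempt < 3 {
		return errNotFound
	}
	return nil
}

func main() {
	backoff := 500 * time.Millisecond // the initial durationBeforeRetry seen above
	for attempt := 1; ; attempt++ {
		if err := mountProjectedVolume(attempt); err != nil {
			fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2 // exponential backoff, capped in the real implementation
			continue
		}
		fmt.Println("volume mounted")
		return
	}
}
```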
Jan 29 11:12:19.778982 kubelet[2655]: I0129 11:12:19.778868 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/763e1748-bde8-4902-bb57-72597b8701ef-cilium-config-path\") pod \"cilium-operator-599987898-g8g79\" (UID: \"763e1748-bde8-4902-bb57-72597b8701ef\") " pod="kube-system/cilium-operator-599987898-g8g79" Jan 29 11:12:19.779347 kubelet[2655]: I0129 11:12:19.779010 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5srj\" (UniqueName: \"kubernetes.io/projected/763e1748-bde8-4902-bb57-72597b8701ef-kube-api-access-s5srj\") pod \"cilium-operator-599987898-g8g79\" (UID: \"763e1748-bde8-4902-bb57-72597b8701ef\") " pod="kube-system/cilium-operator-599987898-g8g79" Jan 29 11:12:19.999496 kubelet[2655]: E0129 11:12:19.998750 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:20.000265 containerd[1475]: time="2025-01-29T11:12:19.999793557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g8g79,Uid:763e1748-bde8-4902-bb57-72597b8701ef,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:20.082236 kubelet[2655]: E0129 11:12:20.081750 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:20.083257 containerd[1475]: time="2025-01-29T11:12:20.083100054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46mr2,Uid:5e0fbcb1-e923-4be5-be56-486a572589ec,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:20.089987 containerd[1475]: time="2025-01-29T11:12:20.089196629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:20.089987 containerd[1475]: time="2025-01-29T11:12:20.089321848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:20.089987 containerd[1475]: time="2025-01-29T11:12:20.089340982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:20.089987 containerd[1475]: time="2025-01-29T11:12:20.089841927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:20.100373 kubelet[2655]: E0129 11:12:20.100313 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:20.103755 containerd[1475]: time="2025-01-29T11:12:20.103569767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95rfj,Uid:be7e408d-2297-4e2e-9590-197a09d4d70c,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:20.126927 systemd[1]: Started cri-containerd-91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5.scope - libcontainer container 91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5. Jan 29 11:12:20.202662 containerd[1475]: time="2025-01-29T11:12:20.200594754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:20.202662 containerd[1475]: time="2025-01-29T11:12:20.200697633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:20.202662 containerd[1475]: time="2025-01-29T11:12:20.201803375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:20.202662 containerd[1475]: time="2025-01-29T11:12:20.202149734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:20.227609 containerd[1475]: time="2025-01-29T11:12:20.227461693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:20.227609 containerd[1475]: time="2025-01-29T11:12:20.227547374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:20.227939 containerd[1475]: time="2025-01-29T11:12:20.227565034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:20.227939 containerd[1475]: time="2025-01-29T11:12:20.227699487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:20.258109 systemd[1]: Started cri-containerd-cc1444af7709279803ef038111029ef9c6fbb5def461ef9443579dfba6e0d69d.scope - libcontainer container cc1444af7709279803ef038111029ef9c6fbb5def461ef9443579dfba6e0d69d. Jan 29 11:12:20.267531 containerd[1475]: time="2025-01-29T11:12:20.266975348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g8g79,Uid:763e1748-bde8-4902-bb57-72597b8701ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5\"" Jan 29 11:12:20.273174 kubelet[2655]: E0129 11:12:20.271814 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:20.275383 containerd[1475]: time="2025-01-29T11:12:20.275320278Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:12:20.281862 systemd[1]: Started cri-containerd-3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7.scope - libcontainer container 3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7. 
Jan 29 11:12:20.351179 containerd[1475]: time="2025-01-29T11:12:20.351108276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46mr2,Uid:5e0fbcb1-e923-4be5-be56-486a572589ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc1444af7709279803ef038111029ef9c6fbb5def461ef9443579dfba6e0d69d\"" Jan 29 11:12:20.352519 kubelet[2655]: E0129 11:12:20.352484 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:20.356514 containerd[1475]: time="2025-01-29T11:12:20.356476838Z" level=info msg="CreateContainer within sandbox \"cc1444af7709279803ef038111029ef9c6fbb5def461ef9443579dfba6e0d69d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:12:20.362458 containerd[1475]: time="2025-01-29T11:12:20.362262200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95rfj,Uid:be7e408d-2297-4e2e-9590-197a09d4d70c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\"" Jan 29 11:12:20.364587 kubelet[2655]: E0129 11:12:20.364409 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:20.450500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255731298.mount: Deactivated successfully. Jan 29 11:12:20.458238 containerd[1475]: time="2025-01-29T11:12:20.458175400Z" level=info msg="CreateContainer within sandbox \"cc1444af7709279803ef038111029ef9c6fbb5def461ef9443579dfba6e0d69d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3fbcb26a12224fdce6b87f65dae5afe221636b21d8922bcc4090dd371f432549\"" Jan 29 11:12:20.459308 containerd[1475]: time="2025-01-29T11:12:20.459151018Z" level=info msg="StartContainer for \"3fbcb26a12224fdce6b87f65dae5afe221636b21d8922bcc4090dd371f432549\"" Jan 29 11:12:20.508145 systemd[1]: Started cri-containerd-3fbcb26a12224fdce6b87f65dae5afe221636b21d8922bcc4090dd371f432549.scope - libcontainer container 3fbcb26a12224fdce6b87f65dae5afe221636b21d8922bcc4090dd371f432549. Jan 29 11:12:20.559276 containerd[1475]: time="2025-01-29T11:12:20.559065008Z" level=info msg="StartContainer for \"3fbcb26a12224fdce6b87f65dae5afe221636b21d8922bcc4090dd371f432549\" returns successfully" Jan 29 11:12:21.405600 kubelet[2655]: E0129 11:12:21.405545 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:21.781587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1191103928.mount: Deactivated successfully. 
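The sequence visible here, RunPodSandbox returning a sandbox ID, CreateContainer within that sandbox, then StartContainer, is the standard CRI lifecycle, and it repeats below for every container on the node. A compile-only sketch of the call order with the cri-api client (sandbox and container configs are left to the caller):

```go
package crilifecycle

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// createAndStart mirrors the CRI call order observed in the log:
// RunPodSandbox -> CreateContainer -> StartContainer.
func createAndStart(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) (string, error) {

	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return "", err
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctrCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return ctr.ContainerId, err
}
```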
Jan 29 11:12:22.409535 kubelet[2655]: E0129 11:12:22.408090 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:24.147347 containerd[1475]: time="2025-01-29T11:12:24.147269747Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:24.161475 containerd[1475]: time="2025-01-29T11:12:24.161393152Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:12:24.168065 containerd[1475]: time="2025-01-29T11:12:24.167885366Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:24.171833 containerd[1475]: time="2025-01-29T11:12:24.171620917Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.896246278s" Jan 29 11:12:24.171833 containerd[1475]: time="2025-01-29T11:12:24.171685067Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:12:24.175672 containerd[1475]: time="2025-01-29T11:12:24.175589980Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:12:24.178385 containerd[1475]: time="2025-01-29T11:12:24.178204392Z" level=info msg="CreateContainer within sandbox \"91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:12:24.275066 containerd[1475]: time="2025-01-29T11:12:24.274896657Z" level=info msg="CreateContainer within sandbox \"91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\"" Jan 29 11:12:24.276643 containerd[1475]: time="2025-01-29T11:12:24.276586179Z" level=info msg="StartContainer for \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\"" Jan 29 11:12:24.326335 systemd[1]: run-containerd-runc-k8s.io-b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a-runc.fJPdVz.mount: Deactivated successfully. Jan 29 11:12:24.338115 systemd[1]: Started cri-containerd-b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a.scope - libcontainer container b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a. 
Jan 29 11:12:24.408988 containerd[1475]: time="2025-01-29T11:12:24.408821001Z" level=info msg="StartContainer for \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\" returns successfully" Jan 29 11:12:24.418609 kubelet[2655]: E0129 11:12:24.418560 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:24.440639 kubelet[2655]: I0129 11:12:24.440527 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-46mr2" podStartSLOduration=5.440496818 podStartE2EDuration="5.440496818s" podCreationTimestamp="2025-01-29 11:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:21.432126638 +0000 UTC m=+15.318102712" watchObservedRunningTime="2025-01-29 11:12:24.440496818 +0000 UTC m=+18.326472907" Jan 29 11:12:25.424975 kubelet[2655]: E0129 11:12:25.423550 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:26.389257 kubelet[2655]: I0129 11:12:26.388964 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-g8g79" podStartSLOduration=3.4894433 podStartE2EDuration="7.388934211s" podCreationTimestamp="2025-01-29 11:12:19 +0000 UTC" firstStartedPulling="2025-01-29 11:12:20.273823395 +0000 UTC m=+14.159799463" lastFinishedPulling="2025-01-29 11:12:24.173314308 +0000 UTC m=+18.059290374" observedRunningTime="2025-01-29 11:12:24.441057708 +0000 UTC m=+18.327033786" watchObservedRunningTime="2025-01-29 11:12:26.388934211 +0000 UTC m=+20.274910277" Jan 29 11:12:30.722335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759668413.mount: Deactivated successfully. 
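Each pod_startup_latency_tracker line encodes two windows: podStartE2EDuration runs from podCreationTimestamp to observedRunningTime, while firstStartedPulling to lastFinishedPulling bounds the image pull. For cilium-operator above, 11:12:19 to 11:12:26.388934211 yields the reported 7.388934211s, which this quick check confirms:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed by the tracker.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-29 11:12:19 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-29 11:12:26.388934211 +0000 UTC")
	fmt.Println(running.Sub(created)) // 7.388934211s, matching podStartE2EDuration
}
```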
Jan 29 11:12:34.172578 containerd[1475]: time="2025-01-29T11:12:34.172501165Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:34.179187 containerd[1475]: time="2025-01-29T11:12:34.179080968Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:12:34.194583 containerd[1475]: time="2025-01-29T11:12:34.194440239Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:34.201923 containerd[1475]: time="2025-01-29T11:12:34.201818076Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.026147918s" Jan 29 11:12:34.201923 containerd[1475]: time="2025-01-29T11:12:34.201888739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:12:34.209620 containerd[1475]: time="2025-01-29T11:12:34.209562236Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:12:34.291593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744373363.mount: Deactivated successfully. Jan 29 11:12:34.328247 containerd[1475]: time="2025-01-29T11:12:34.328164344Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\"" Jan 29 11:12:34.329379 containerd[1475]: time="2025-01-29T11:12:34.329264017Z" level=info msg="StartContainer for \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\"" Jan 29 11:12:34.693299 systemd[1]: run-containerd-runc-k8s.io-6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c-runc.NEQnLM.mount: Deactivated successfully. Jan 29 11:12:34.707066 systemd[1]: Started cri-containerd-6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c.scope - libcontainer container 6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c. Jan 29 11:12:34.803458 containerd[1475]: time="2025-01-29T11:12:34.803130952Z" level=info msg="StartContainer for \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\" returns successfully" Jan 29 11:12:34.806921 systemd[1]: cri-containerd-6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c.scope: Deactivated successfully. 
Jan 29 11:12:34.973359 containerd[1475]: time="2025-01-29T11:12:34.946957791Z" level=info msg="shim disconnected" id=6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c namespace=k8s.io Jan 29 11:12:34.973359 containerd[1475]: time="2025-01-29T11:12:34.972829396Z" level=warning msg="cleaning up after shim disconnected" id=6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c namespace=k8s.io Jan 29 11:12:34.973359 containerd[1475]: time="2025-01-29T11:12:34.972864817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:35.286571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c-rootfs.mount: Deactivated successfully. Jan 29 11:12:35.487285 kubelet[2655]: E0129 11:12:35.487231 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:35.492762 containerd[1475]: time="2025-01-29T11:12:35.491918297Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:12:35.572498 containerd[1475]: time="2025-01-29T11:12:35.572138670Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\"" Jan 29 11:12:35.574800 containerd[1475]: time="2025-01-29T11:12:35.573762598Z" level=info msg="StartContainer for \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\"" Jan 29 11:12:35.616030 systemd[1]: Started cri-containerd-0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387.scope - libcontainer container 0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387. Jan 29 11:12:35.689017 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:12:35.689587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:12:35.689680 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:12:35.692447 containerd[1475]: time="2025-01-29T11:12:35.692284320Z" level=info msg="StartContainer for \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\" returns successfully" Jan 29 11:12:35.698499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:12:35.698777 systemd[1]: cri-containerd-0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387.scope: Deactivated successfully. Jan 29 11:12:35.761237 containerd[1475]: time="2025-01-29T11:12:35.761159383Z" level=info msg="shim disconnected" id=0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387 namespace=k8s.io Jan 29 11:12:35.762293 containerd[1475]: time="2025-01-29T11:12:35.761756730Z" level=warning msg="cleaning up after shim disconnected" id=0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387 namespace=k8s.io Jan 29 11:12:35.762293 containerd[1475]: time="2025-01-29T11:12:35.761813571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:35.776284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
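apply-sysctl-overwrites is Cilium's init step that rewrites kernel parameters the agent needs; the systemd-sysctl stop/start woven through it is Flatcar reapplying its own sysctl configuration afterwards. Setting such a parameter amounts to writing under /proc/sys, roughly as below (the specific key is an assumption; rp_filter is one Cilium commonly relaxes):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value under /proc/sys, e.g. the dotted key
// net.ipv4.conf.all.rp_filter maps to /proc/sys/net/ipv4/conf/all/rp_filter.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Assumed example key; requires root.
	if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err)
	}
}
```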
Jan 29 11:12:36.286258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387-rootfs.mount: Deactivated successfully. Jan 29 11:12:36.492233 kubelet[2655]: E0129 11:12:36.491768 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:36.496803 containerd[1475]: time="2025-01-29T11:12:36.496763674Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:12:36.636536 containerd[1475]: time="2025-01-29T11:12:36.636476382Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\"" Jan 29 11:12:36.637509 containerd[1475]: time="2025-01-29T11:12:36.637470251Z" level=info msg="StartContainer for \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\"" Jan 29 11:12:36.699138 systemd[1]: Started cri-containerd-f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679.scope - libcontainer container f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679. Jan 29 11:12:36.755199 systemd[1]: cri-containerd-f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679.scope: Deactivated successfully. Jan 29 11:12:36.767322 containerd[1475]: time="2025-01-29T11:12:36.767278256Z" level=info msg="StartContainer for \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\" returns successfully" Jan 29 11:12:36.819105 containerd[1475]: time="2025-01-29T11:12:36.819020474Z" level=info msg="shim disconnected" id=f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679 namespace=k8s.io Jan 29 11:12:36.819886 containerd[1475]: time="2025-01-29T11:12:36.819605815Z" level=warning msg="cleaning up after shim disconnected" id=f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679 namespace=k8s.io Jan 29 11:12:36.819886 containerd[1475]: time="2025-01-29T11:12:36.819637358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:37.286578 systemd[1]: run-containerd-runc-k8s.io-f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679-runc.CW4qaU.mount: Deactivated successfully. Jan 29 11:12:37.287194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679-rootfs.mount: Deactivated successfully. Jan 29 11:12:37.497353 kubelet[2655]: E0129 11:12:37.497311 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:37.504849 containerd[1475]: time="2025-01-29T11:12:37.504512114Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:12:37.571081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977634659.mount: Deactivated successfully. 
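mount-bpf-fs ensures a BPF filesystem is mounted at /sys/fs/bpf so Cilium's maps outlive agent restarts. Note the recurring pattern for these init containers: each scope is "Deactivated" immediately after a successful start because the container runs to completion and exits. A hedged equivalent of the mount step using golang.org/x/sys/unix:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

const bpfFSMagic = 0xcafe4a11 // BPF_FS_MAGIC from the kernel headers

// ensureBPFFS mounts bpffs on /sys/fs/bpf unless it is already mounted,
// mirroring what Cilium's mount-bpf-fs init step accomplishes.
func ensureBPFFS() error {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/bpf", &st); err == nil && st.Type == bpfFSMagic {
		return nil // already a BPF filesystem
	}
	return unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
}

func main() {
	if err := ensureBPFFS(); err != nil {
		log.Fatal(err)
	}
}
```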
Jan 29 11:12:37.580772 containerd[1475]: time="2025-01-29T11:12:37.580679504Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\"" Jan 29 11:12:37.583215 containerd[1475]: time="2025-01-29T11:12:37.581841200Z" level=info msg="StartContainer for \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\"" Jan 29 11:12:37.646017 systemd[1]: Started cri-containerd-4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc.scope - libcontainer container 4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc. Jan 29 11:12:37.686900 systemd[1]: cri-containerd-4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc.scope: Deactivated successfully. Jan 29 11:12:37.709745 containerd[1475]: time="2025-01-29T11:12:37.709553130Z" level=info msg="StartContainer for \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\" returns successfully" Jan 29 11:12:37.754766 containerd[1475]: time="2025-01-29T11:12:37.754583427Z" level=info msg="shim disconnected" id=4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc namespace=k8s.io Jan 29 11:12:37.754766 containerd[1475]: time="2025-01-29T11:12:37.754755276Z" level=warning msg="cleaning up after shim disconnected" id=4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc namespace=k8s.io Jan 29 11:12:37.754766 containerd[1475]: time="2025-01-29T11:12:37.754768686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:38.286186 systemd[1]: run-containerd-runc-k8s.io-4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc-runc.hW3eGs.mount: Deactivated successfully. Jan 29 11:12:38.286852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc-rootfs.mount: Deactivated successfully. Jan 29 11:12:38.510769 kubelet[2655]: E0129 11:12:38.509638 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:38.516890 containerd[1475]: time="2025-01-29T11:12:38.516699006Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:12:38.584069 containerd[1475]: time="2025-01-29T11:12:38.583096592Z" level=info msg="CreateContainer within sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\"" Jan 29 11:12:38.586755 containerd[1475]: time="2025-01-29T11:12:38.586183896Z" level=info msg="StartContainer for \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\"" Jan 29 11:12:38.637205 systemd[1]: run-containerd-runc-k8s.io-4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef-runc.ofNsNy.mount: Deactivated successfully. Jan 29 11:12:38.647246 systemd[1]: Started cri-containerd-4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef.scope - libcontainer container 4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef. 
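Taken together, the log traces Cilium's init chain in order, mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, before the long-running cilium-agent container starts. That ordering is plain initContainers semantics: kubelet runs init containers sequentially, each to completion. A trimmed sketch of the pod shape (images and commands omitted, only the names seen above):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Init containers run one at a time, in order, exactly as the
	// scope start/deactivate pairs in the log show.
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{
				{Name: "mount-cgroup"},
				{Name: "apply-sysctl-overwrites"},
				{Name: "mount-bpf-fs"},
				{Name: "clean-cilium-state"},
			},
			Containers: []corev1.Container{{Name: "cilium-agent"}},
		},
	}
	for _, c := range pod.Spec.InitContainers {
		fmt.Println("init:", c.Name)
	}
}
```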
Jan 29 11:12:38.706241 containerd[1475]: time="2025-01-29T11:12:38.706178340Z" level=info msg="StartContainer for \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\" returns successfully" Jan 29 11:12:38.918622 kubelet[2655]: I0129 11:12:38.917383 2655 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:12:38.979661 kubelet[2655]: I0129 11:12:38.978947 2655 topology_manager.go:215] "Topology Admit Handler" podUID="367dd8ed-476f-4a44-9ecc-925b3a1607e0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tgqp8" Jan 29 11:12:38.986024 kubelet[2655]: I0129 11:12:38.985121 2655 topology_manager.go:215] "Topology Admit Handler" podUID="b42a98dd-4f26-4143-b349-cee4dcd68ee0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bjfz6" Jan 29 11:12:38.996560 systemd[1]: Created slice kubepods-burstable-pod367dd8ed_476f_4a44_9ecc_925b3a1607e0.slice - libcontainer container kubepods-burstable-pod367dd8ed_476f_4a44_9ecc_925b3a1607e0.slice. Jan 29 11:12:39.011953 systemd[1]: Created slice kubepods-burstable-podb42a98dd_4f26_4143_b349_cee4dcd68ee0.slice - libcontainer container kubepods-burstable-podb42a98dd_4f26_4143_b349_cee4dcd68ee0.slice. Jan 29 11:12:39.086408 kubelet[2655]: I0129 11:12:39.086027 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b42a98dd-4f26-4143-b349-cee4dcd68ee0-config-volume\") pod \"coredns-7db6d8ff4d-bjfz6\" (UID: \"b42a98dd-4f26-4143-b349-cee4dcd68ee0\") " pod="kube-system/coredns-7db6d8ff4d-bjfz6" Jan 29 11:12:39.086408 kubelet[2655]: I0129 11:12:39.086118 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmglm\" (UniqueName: \"kubernetes.io/projected/367dd8ed-476f-4a44-9ecc-925b3a1607e0-kube-api-access-nmglm\") pod \"coredns-7db6d8ff4d-tgqp8\" (UID: \"367dd8ed-476f-4a44-9ecc-925b3a1607e0\") " pod="kube-system/coredns-7db6d8ff4d-tgqp8" Jan 29 11:12:39.086408 kubelet[2655]: I0129 11:12:39.086184 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qt9\" (UniqueName: \"kubernetes.io/projected/b42a98dd-4f26-4143-b349-cee4dcd68ee0-kube-api-access-55qt9\") pod \"coredns-7db6d8ff4d-bjfz6\" (UID: \"b42a98dd-4f26-4143-b349-cee4dcd68ee0\") " pod="kube-system/coredns-7db6d8ff4d-bjfz6" Jan 29 11:12:39.086408 kubelet[2655]: I0129 11:12:39.086230 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/367dd8ed-476f-4a44-9ecc-925b3a1607e0-config-volume\") pod \"coredns-7db6d8ff4d-tgqp8\" (UID: \"367dd8ed-476f-4a44-9ecc-925b3a1607e0\") " pod="kube-system/coredns-7db6d8ff4d-tgqp8" Jan 29 11:12:39.302545 kubelet[2655]: E0129 11:12:39.302285 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:39.305758 containerd[1475]: time="2025-01-29T11:12:39.305590106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tgqp8,Uid:367dd8ed-476f-4a44-9ecc-925b3a1607e0,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:39.317610 kubelet[2655]: E0129 11:12:39.317125 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Jan 29 11:12:39.319966 containerd[1475]: time="2025-01-29T11:12:39.319097196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bjfz6,Uid:b42a98dd-4f26-4143-b349-cee4dcd68ee0,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:39.551932 kubelet[2655]: E0129 11:12:39.550587 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:39.585509 kubelet[2655]: I0129 11:12:39.585298 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-95rfj" podStartSLOduration=6.746867112 podStartE2EDuration="20.585268819s" podCreationTimestamp="2025-01-29 11:12:19 +0000 UTC" firstStartedPulling="2025-01-29 11:12:20.367112435 +0000 UTC m=+14.253088490" lastFinishedPulling="2025-01-29 11:12:34.205514142 +0000 UTC m=+28.091490197" observedRunningTime="2025-01-29 11:12:39.583636163 +0000 UTC m=+33.469612264" watchObservedRunningTime="2025-01-29 11:12:39.585268819 +0000 UTC m=+33.471244895" Jan 29 11:12:40.552317 kubelet[2655]: E0129 11:12:40.552266 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:41.236171 systemd-networkd[1357]: cilium_host: Link UP Jan 29 11:12:41.239950 systemd-networkd[1357]: cilium_net: Link UP Jan 29 11:12:41.240903 systemd-networkd[1357]: cilium_net: Gained carrier Jan 29 11:12:41.242072 systemd-networkd[1357]: cilium_host: Gained carrier Jan 29 11:12:41.421900 systemd-networkd[1357]: cilium_vxlan: Link UP Jan 29 11:12:41.421912 systemd-networkd[1357]: cilium_vxlan: Gained carrier Jan 29 11:12:41.555383 kubelet[2655]: E0129 11:12:41.555204 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:41.833872 systemd-networkd[1357]: cilium_net: Gained IPv6LL Jan 29 11:12:42.218460 systemd-networkd[1357]: cilium_host: Gained IPv6LL Jan 29 11:12:42.286829 kernel: NET: Registered PF_ALG protocol family Jan 29 11:12:42.474016 systemd-networkd[1357]: cilium_vxlan: Gained IPv6LL Jan 29 11:12:43.523748 systemd-networkd[1357]: lxc_health: Link UP Jan 29 11:12:43.533101 systemd-networkd[1357]: lxc_health: Gained carrier Jan 29 11:12:43.993763 kernel: eth0: renamed from tmp3f7b7 Jan 29 11:12:44.000001 systemd-networkd[1357]: lxcbf72241745a5: Link UP Jan 29 11:12:44.012913 systemd-networkd[1357]: lxcbf72241745a5: Gained carrier Jan 29 11:12:44.020938 systemd-networkd[1357]: lxc666ad81c1b54: Link UP Jan 29 11:12:44.030788 kernel: eth0: renamed from tmp98ed5 Jan 29 11:12:44.042050 systemd-networkd[1357]: lxc666ad81c1b54: Gained carrier Jan 29 11:12:44.113295 kubelet[2655]: E0129 11:12:44.113243 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:44.567124 kubelet[2655]: E0129 11:12:44.566996 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:45.290012 systemd-networkd[1357]: lxcbf72241745a5: Gained IPv6LL Jan 29 11:12:45.546684 systemd-networkd[1357]: lxc_health: Gained IPv6LL Jan 29 11:12:45.569311 
kubelet[2655]: E0129 11:12:45.569263 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:45.802104 systemd-networkd[1357]: lxc666ad81c1b54: Gained IPv6LL Jan 29 11:12:51.132960 containerd[1475]: time="2025-01-29T11:12:51.132403212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:51.132960 containerd[1475]: time="2025-01-29T11:12:51.132543211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:51.132960 containerd[1475]: time="2025-01-29T11:12:51.132570729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:51.132960 containerd[1475]: time="2025-01-29T11:12:51.132699916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:51.161974 containerd[1475]: time="2025-01-29T11:12:51.157412367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:51.161974 containerd[1475]: time="2025-01-29T11:12:51.157531015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:51.161974 containerd[1475]: time="2025-01-29T11:12:51.157553487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:51.161974 containerd[1475]: time="2025-01-29T11:12:51.157670257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:51.194383 systemd[1]: Started cri-containerd-3f7b7be7a567e0f57d88ce82492eea481126bebc2bdb64c0991dbc0850a87999.scope - libcontainer container 3f7b7be7a567e0f57d88ce82492eea481126bebc2bdb64c0991dbc0850a87999. Jan 29 11:12:51.240050 systemd[1]: Started cri-containerd-98ed5a228301a9fb9fec67a89e067aa88fbfd0ee73b2e93ec79f24eddb89c318.scope - libcontainer container 98ed5a228301a9fb9fec67a89e067aa88fbfd0ee73b2e93ec79f24eddb89c318. 
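The cilium_host/cilium_net/cilium_vxlan links and the "eth0: renamed from tmpXXXX" kernel messages earlier are Cilium wiring endpoint networking: for each pod it creates a veth pair, keeps the lxc* peer on the host, and moves the temporary end into the pod's network namespace where it becomes eth0. A minimal veth sketch with github.com/vishvananda/netlink (names are placeholders; this must run as root):

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side name like lxcbf72241745a5; the peer starts as a temp name
	// (tmp3f7b7 in the log) before being renamed eth0 inside the pod netns.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "lxc_example"},
		PeerName:  "tmp_example",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}
	// Moving the peer into the pod namespace would use LinkSetNsFd,
	// followed by LinkSetName to rename it to eth0 (omitted here).
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatal(err)
	}
}
```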
Jan 29 11:12:51.341105 containerd[1475]: time="2025-01-29T11:12:51.339130354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tgqp8,Uid:367dd8ed-476f-4a44-9ecc-925b3a1607e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f7b7be7a567e0f57d88ce82492eea481126bebc2bdb64c0991dbc0850a87999\"" Jan 29 11:12:51.342837 kubelet[2655]: E0129 11:12:51.342675 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:51.350155 containerd[1475]: time="2025-01-29T11:12:51.349333501Z" level=info msg="CreateContainer within sandbox \"3f7b7be7a567e0f57d88ce82492eea481126bebc2bdb64c0991dbc0850a87999\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:12:51.374899 containerd[1475]: time="2025-01-29T11:12:51.374846487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bjfz6,Uid:b42a98dd-4f26-4143-b349-cee4dcd68ee0,Namespace:kube-system,Attempt:0,} returns sandbox id \"98ed5a228301a9fb9fec67a89e067aa88fbfd0ee73b2e93ec79f24eddb89c318\"" Jan 29 11:12:51.377993 kubelet[2655]: E0129 11:12:51.377625 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:51.383276 containerd[1475]: time="2025-01-29T11:12:51.382931108Z" level=info msg="CreateContainer within sandbox \"98ed5a228301a9fb9fec67a89e067aa88fbfd0ee73b2e93ec79f24eddb89c318\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:12:51.581143 containerd[1475]: time="2025-01-29T11:12:51.581066300Z" level=info msg="CreateContainer within sandbox \"98ed5a228301a9fb9fec67a89e067aa88fbfd0ee73b2e93ec79f24eddb89c318\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdca7662d04485b8b77e92c2f300ab575456e0937c6dee8d9b683ceebfad76c5\"" Jan 29 11:12:51.585532 containerd[1475]: time="2025-01-29T11:12:51.582564251Z" level=info msg="StartContainer for \"cdca7662d04485b8b77e92c2f300ab575456e0937c6dee8d9b683ceebfad76c5\"" Jan 29 11:12:51.587383 containerd[1475]: time="2025-01-29T11:12:51.586873645Z" level=info msg="CreateContainer within sandbox \"3f7b7be7a567e0f57d88ce82492eea481126bebc2bdb64c0991dbc0850a87999\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcfaca26b5625fefd8dce6fe3595ee3fa2d7027dad79703935ea28577630cc1e\"" Jan 29 11:12:51.589189 containerd[1475]: time="2025-01-29T11:12:51.589143574Z" level=info msg="StartContainer for \"dcfaca26b5625fefd8dce6fe3595ee3fa2d7027dad79703935ea28577630cc1e\"" Jan 29 11:12:51.637339 systemd[1]: Started cri-containerd-cdca7662d04485b8b77e92c2f300ab575456e0937c6dee8d9b683ceebfad76c5.scope - libcontainer container cdca7662d04485b8b77e92c2f300ab575456e0937c6dee8d9b683ceebfad76c5. Jan 29 11:12:51.653116 systemd[1]: Started cri-containerd-dcfaca26b5625fefd8dce6fe3595ee3fa2d7027dad79703935ea28577630cc1e.scope - libcontainer container dcfaca26b5625fefd8dce6fe3595ee3fa2d7027dad79703935ea28577630cc1e. 
Jan 29 11:12:51.722870 containerd[1475]: time="2025-01-29T11:12:51.722816917Z" level=info msg="StartContainer for \"dcfaca26b5625fefd8dce6fe3595ee3fa2d7027dad79703935ea28577630cc1e\" returns successfully" Jan 29 11:12:51.723340 containerd[1475]: time="2025-01-29T11:12:51.722907533Z" level=info msg="StartContainer for \"cdca7662d04485b8b77e92c2f300ab575456e0937c6dee8d9b683ceebfad76c5\" returns successfully" Jan 29 11:12:52.146199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483376685.mount: Deactivated successfully. Jan 29 11:12:52.603614 kubelet[2655]: E0129 11:12:52.602370 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:52.611001 kubelet[2655]: E0129 11:12:52.610957 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:52.658659 kubelet[2655]: I0129 11:12:52.656807 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tgqp8" podStartSLOduration=33.656781318 podStartE2EDuration="33.656781318s" podCreationTimestamp="2025-01-29 11:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:52.654574155 +0000 UTC m=+46.540550229" watchObservedRunningTime="2025-01-29 11:12:52.656781318 +0000 UTC m=+46.542757393" Jan 29 11:12:52.658659 kubelet[2655]: I0129 11:12:52.656943 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bjfz6" podStartSLOduration=33.656935128 podStartE2EDuration="33.656935128s" podCreationTimestamp="2025-01-29 11:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:52.631396589 +0000 UTC m=+46.517372664" watchObservedRunningTime="2025-01-29 11:12:52.656935128 +0000 UTC m=+46.542911203" Jan 29 11:12:53.613675 kubelet[2655]: E0129 11:12:53.613122 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:53.613675 kubelet[2655]: E0129 11:12:53.613399 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:54.616464 kubelet[2655]: E0129 11:12:54.616385 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:12:54.618009 kubelet[2655]: E0129 11:12:54.617967 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:13:08.434806 systemd[1]: Started sshd@7-64.23.245.19:22-139.178.89.65:33628.service - OpenSSH per-connection server daemon (139.178.89.65:33628). 
Jan 29 11:13:08.557180 sshd[4025]: Accepted publickey for core from 139.178.89.65 port 33628 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:08.560280 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:08.568416 systemd-logind[1450]: New session 8 of user core. Jan 29 11:13:08.578113 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:13:09.478675 sshd[4027]: Connection closed by 139.178.89.65 port 33628 Jan 29 11:13:09.479780 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:09.484591 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:13:09.485970 systemd[1]: sshd@7-64.23.245.19:22-139.178.89.65:33628.service: Deactivated successfully. Jan 29 11:13:09.489413 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:13:09.491300 systemd-logind[1450]: Removed session 8. Jan 29 11:13:14.495507 systemd[1]: Started sshd@8-64.23.245.19:22-139.178.89.65:50536.service - OpenSSH per-connection server daemon (139.178.89.65:50536). Jan 29 11:13:14.611995 sshd[4040]: Accepted publickey for core from 139.178.89.65 port 50536 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:14.614646 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:14.622911 systemd-logind[1450]: New session 9 of user core. Jan 29 11:13:14.635789 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:13:14.817928 sshd[4042]: Connection closed by 139.178.89.65 port 50536 Jan 29 11:13:14.819493 sshd-session[4040]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:14.824230 systemd[1]: sshd@8-64.23.245.19:22-139.178.89.65:50536.service: Deactivated successfully. Jan 29 11:13:14.827814 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:13:14.831782 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:13:14.833982 systemd-logind[1450]: Removed session 9. Jan 29 11:13:19.839378 systemd[1]: Started sshd@9-64.23.245.19:22-139.178.89.65:50540.service - OpenSSH per-connection server daemon (139.178.89.65:50540). Jan 29 11:13:19.908471 sshd[4054]: Accepted publickey for core from 139.178.89.65 port 50540 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:19.910814 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:19.919256 systemd-logind[1450]: New session 10 of user core. Jan 29 11:13:19.929074 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:13:20.098317 sshd[4056]: Connection closed by 139.178.89.65 port 50540 Jan 29 11:13:20.099634 sshd-session[4054]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:20.107207 systemd[1]: sshd@9-64.23.245.19:22-139.178.89.65:50540.service: Deactivated successfully. Jan 29 11:13:20.111123 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:13:20.113026 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:13:20.114511 systemd-logind[1450]: Removed session 10. 
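Sessions 8 through 10 above establish the pattern that repeats for the remainder of this log: a socket-activated per-connection unit (sshd@N-<local>:22-<peer>:<port>.service), a PAM session opened for user core, a logind session-N.scope, and a symmetric teardown seconds later. A hypothetical helper, not part of systemd, that pairs those open/close lines and measures session lifetimes, assuming one journal entry per line:

```go
// Hypothetical log-analysis helper: pairs "New session N of user core." with
// "Removed session N." and prints how long each SSH session stayed open.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newRe = regexp.MustCompile(`(\w{3} \d+ [\d:.]+) .*New session (\d+) of user`)
	rmRe  = regexp.MustCompile(`(\w{3} \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

const stamp = "Jan 2 15:04:05.000000" // matches the timestamp format above

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := rmRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s: %v\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}
```

Run against the lines above, this would report session 8 lasting just under a second (opened 11:13:08.568, removed 11:13:09.491), which is typical of the short probe-like connections that follow.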
Jan 29 11:13:24.306700 kubelet[2655]: E0129 11:13:24.304953 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:13:25.131259 systemd[1]: Started sshd@10-64.23.245.19:22-139.178.89.65:60076.service - OpenSSH per-connection server daemon (139.178.89.65:60076). Jan 29 11:13:25.196653 sshd[4073]: Accepted publickey for core from 139.178.89.65 port 60076 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:25.199675 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:25.211171 systemd-logind[1450]: New session 11 of user core. Jan 29 11:13:25.217144 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:13:25.403392 sshd[4075]: Connection closed by 139.178.89.65 port 60076 Jan 29 11:13:25.404298 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:25.409135 systemd[1]: sshd@10-64.23.245.19:22-139.178.89.65:60076.service: Deactivated successfully. Jan 29 11:13:25.412864 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:13:25.416174 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:13:25.418389 systemd-logind[1450]: Removed session 11. Jan 29 11:13:29.305167 kubelet[2655]: E0129 11:13:29.305086 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:13:30.307127 kubelet[2655]: E0129 11:13:30.304860 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:13:30.453306 systemd[1]: Started sshd@11-64.23.245.19:22-139.178.89.65:60080.service - OpenSSH per-connection server daemon (139.178.89.65:60080). Jan 29 11:13:30.711638 sshd[4088]: Accepted publickey for core from 139.178.89.65 port 60080 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:30.718547 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:30.735656 systemd-logind[1450]: New session 12 of user core. Jan 29 11:13:30.741423 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:13:30.964767 sshd[4090]: Connection closed by 139.178.89.65 port 60080 Jan 29 11:13:30.965842 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:30.970225 systemd[1]: sshd@11-64.23.245.19:22-139.178.89.65:60080.service: Deactivated successfully. Jan 29 11:13:30.973964 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:13:30.976545 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:13:30.978094 systemd-logind[1450]: Removed session 12. Jan 29 11:13:33.304362 kubelet[2655]: E0129 11:13:33.304307 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:13:35.990314 systemd[1]: Started sshd@12-64.23.245.19:22-139.178.89.65:40442.service - OpenSSH per-connection server daemon (139.178.89.65:40442). 
Jan 29 11:13:36.077437 sshd[4102]: Accepted publickey for core from 139.178.89.65 port 40442 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:36.081046 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:36.091767 systemd-logind[1450]: New session 13 of user core. Jan 29 11:13:36.098217 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:13:36.294175 sshd[4104]: Connection closed by 139.178.89.65 port 40442 Jan 29 11:13:36.295410 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:36.310332 systemd[1]: sshd@12-64.23.245.19:22-139.178.89.65:40442.service: Deactivated successfully. Jan 29 11:13:36.314571 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:13:36.319552 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:13:36.326338 systemd[1]: Started sshd@13-64.23.245.19:22-139.178.89.65:40450.service - OpenSSH per-connection server daemon (139.178.89.65:40450). Jan 29 11:13:36.330408 systemd-logind[1450]: Removed session 13. Jan 29 11:13:36.399254 sshd[4116]: Accepted publickey for core from 139.178.89.65 port 40450 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:36.402853 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:36.410789 systemd-logind[1450]: New session 14 of user core. Jan 29 11:13:36.419067 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:13:36.687428 sshd[4118]: Connection closed by 139.178.89.65 port 40450 Jan 29 11:13:36.688880 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:36.706950 systemd[1]: sshd@13-64.23.245.19:22-139.178.89.65:40450.service: Deactivated successfully. Jan 29 11:13:36.712259 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:13:36.718818 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:13:36.732428 systemd[1]: Started sshd@14-64.23.245.19:22-139.178.89.65:40464.service - OpenSSH per-connection server daemon (139.178.89.65:40464). Jan 29 11:13:36.737879 systemd-logind[1450]: Removed session 14. Jan 29 11:13:36.822783 sshd[4127]: Accepted publickey for core from 139.178.89.65 port 40464 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:36.825038 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:36.835897 systemd-logind[1450]: New session 15 of user core. Jan 29 11:13:36.841222 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:13:37.044500 sshd[4129]: Connection closed by 139.178.89.65 port 40464 Jan 29 11:13:37.045861 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:37.051889 systemd[1]: sshd@14-64.23.245.19:22-139.178.89.65:40464.service: Deactivated successfully. Jan 29 11:13:37.055267 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:13:37.059917 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:13:37.061994 systemd-logind[1450]: Removed session 15. Jan 29 11:13:42.073326 systemd[1]: Started sshd@15-64.23.245.19:22-139.178.89.65:39214.service - OpenSSH per-connection server daemon (139.178.89.65:39214). 
Jan 29 11:13:42.136577 sshd[4140]: Accepted publickey for core from 139.178.89.65 port 39214 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:42.137620 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:42.147605 systemd-logind[1450]: New session 16 of user core. Jan 29 11:13:42.164096 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:13:42.362614 sshd[4142]: Connection closed by 139.178.89.65 port 39214 Jan 29 11:13:42.363974 sshd-session[4140]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:42.370084 systemd[1]: sshd@15-64.23.245.19:22-139.178.89.65:39214.service: Deactivated successfully. Jan 29 11:13:42.373565 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:13:42.375814 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:13:42.378547 systemd-logind[1450]: Removed session 16. Jan 29 11:13:44.305259 kubelet[2655]: E0129 11:13:44.305186 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:13:47.386329 systemd[1]: Started sshd@16-64.23.245.19:22-139.178.89.65:39216.service - OpenSSH per-connection server daemon (139.178.89.65:39216). Jan 29 11:13:47.493749 sshd[4154]: Accepted publickey for core from 139.178.89.65 port 39216 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:47.494844 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:47.504035 systemd-logind[1450]: New session 17 of user core. Jan 29 11:13:47.514043 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:13:47.725075 sshd[4156]: Connection closed by 139.178.89.65 port 39216 Jan 29 11:13:47.726927 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:47.733883 systemd[1]: sshd@16-64.23.245.19:22-139.178.89.65:39216.service: Deactivated successfully. Jan 29 11:13:47.736600 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:13:47.738283 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:13:47.740568 systemd-logind[1450]: Removed session 17. Jan 29 11:13:52.748411 systemd[1]: Started sshd@17-64.23.245.19:22-139.178.89.65:53310.service - OpenSSH per-connection server daemon (139.178.89.65:53310). Jan 29 11:13:52.838043 sshd[4169]: Accepted publickey for core from 139.178.89.65 port 53310 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:52.838744 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:52.846305 systemd-logind[1450]: New session 18 of user core. Jan 29 11:13:52.857165 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:13:53.048848 sshd[4171]: Connection closed by 139.178.89.65 port 53310 Jan 29 11:13:53.048087 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:53.054955 systemd[1]: sshd@17-64.23.245.19:22-139.178.89.65:53310.service: Deactivated successfully. Jan 29 11:13:53.060845 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:13:53.062369 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:13:53.064813 systemd-logind[1450]: Removed session 18. 
Jan 29 11:13:58.068299 systemd[1]: Started sshd@18-64.23.245.19:22-139.178.89.65:53314.service - OpenSSH per-connection server daemon (139.178.89.65:53314). Jan 29 11:13:58.144773 sshd[4182]: Accepted publickey for core from 139.178.89.65 port 53314 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:13:58.146302 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:58.154574 systemd-logind[1450]: New session 19 of user core. Jan 29 11:13:58.170105 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:13:58.336182 sshd[4184]: Connection closed by 139.178.89.65 port 53314 Jan 29 11:13:58.337020 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:58.342076 systemd[1]: sshd@18-64.23.245.19:22-139.178.89.65:53314.service: Deactivated successfully. Jan 29 11:13:58.345683 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:13:58.347239 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:13:58.349032 systemd-logind[1450]: Removed session 19. Jan 29 11:13:59.306128 kubelet[2655]: E0129 11:13:59.305965 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:14:03.364202 systemd[1]: Started sshd@19-64.23.245.19:22-139.178.89.65:49340.service - OpenSSH per-connection server daemon (139.178.89.65:49340). Jan 29 11:14:03.423396 sshd[4195]: Accepted publickey for core from 139.178.89.65 port 49340 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:03.425365 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:03.434183 systemd-logind[1450]: New session 20 of user core. Jan 29 11:14:03.439039 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:14:03.602072 sshd[4197]: Connection closed by 139.178.89.65 port 49340 Jan 29 11:14:03.603987 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:03.608496 systemd[1]: sshd@19-64.23.245.19:22-139.178.89.65:49340.service: Deactivated successfully. Jan 29 11:14:03.612158 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:14:03.615572 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:14:03.618935 systemd-logind[1450]: Removed session 20. Jan 29 11:14:08.626790 systemd[1]: Started sshd@20-64.23.245.19:22-139.178.89.65:49356.service - OpenSSH per-connection server daemon (139.178.89.65:49356). Jan 29 11:14:08.706646 sshd[4210]: Accepted publickey for core from 139.178.89.65 port 49356 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:08.709180 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:08.717159 systemd-logind[1450]: New session 21 of user core. Jan 29 11:14:08.725152 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:14:08.904887 sshd[4212]: Connection closed by 139.178.89.65 port 49356 Jan 29 11:14:08.903957 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:08.910601 systemd[1]: sshd@20-64.23.245.19:22-139.178.89.65:49356.service: Deactivated successfully. Jan 29 11:14:08.916604 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:14:08.918349 systemd-logind[1450]: Session 21 logged out. 
Waiting for processes to exit. Jan 29 11:14:08.920068 systemd-logind[1450]: Removed session 21. Jan 29 11:14:11.304700 kubelet[2655]: E0129 11:14:11.304589 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:14:13.938315 systemd[1]: Started sshd@21-64.23.245.19:22-139.178.89.65:52128.service - OpenSSH per-connection server daemon (139.178.89.65:52128). Jan 29 11:14:14.016850 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 52128 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:14.019082 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:14.027381 systemd-logind[1450]: New session 22 of user core. Jan 29 11:14:14.035106 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:14:14.216778 sshd[4224]: Connection closed by 139.178.89.65 port 52128 Jan 29 11:14:14.215813 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:14.221079 systemd[1]: sshd@21-64.23.245.19:22-139.178.89.65:52128.service: Deactivated successfully. Jan 29 11:14:14.225438 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:14:14.229113 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:14:14.231638 systemd-logind[1450]: Removed session 22. Jan 29 11:14:19.237273 systemd[1]: Started sshd@22-64.23.245.19:22-139.178.89.65:52144.service - OpenSSH per-connection server daemon (139.178.89.65:52144). Jan 29 11:14:19.309792 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 52144 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:19.311596 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:19.322222 systemd-logind[1450]: New session 23 of user core. Jan 29 11:14:19.332056 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:14:19.500770 sshd[4237]: Connection closed by 139.178.89.65 port 52144 Jan 29 11:14:19.501895 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:19.507547 systemd[1]: sshd@22-64.23.245.19:22-139.178.89.65:52144.service: Deactivated successfully. Jan 29 11:14:19.510846 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:14:19.513130 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:14:19.516272 systemd-logind[1450]: Removed session 23. Jan 29 11:14:24.311744 kubelet[2655]: E0129 11:14:24.311658 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:14:24.527330 systemd[1]: Started sshd@23-64.23.245.19:22-139.178.89.65:43298.service - OpenSSH per-connection server daemon (139.178.89.65:43298). Jan 29 11:14:24.607390 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 43298 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:24.607975 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:24.616417 systemd-logind[1450]: New session 24 of user core. Jan 29 11:14:24.627187 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 29 11:14:24.804520 sshd[4251]: Connection closed by 139.178.89.65 port 43298 Jan 29 11:14:24.806340 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:24.812476 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:14:24.814147 systemd[1]: sshd@23-64.23.245.19:22-139.178.89.65:43298.service: Deactivated successfully. Jan 29 11:14:24.819358 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:14:24.821674 systemd-logind[1450]: Removed session 24. Jan 29 11:14:26.304937 kubelet[2655]: E0129 11:14:26.304293 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:14:29.831272 systemd[1]: Started sshd@24-64.23.245.19:22-139.178.89.65:43308.service - OpenSSH per-connection server daemon (139.178.89.65:43308). Jan 29 11:14:29.900236 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 43308 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:29.902736 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:29.913221 systemd-logind[1450]: New session 25 of user core. Jan 29 11:14:29.917058 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:14:30.315852 sshd[4264]: Connection closed by 139.178.89.65 port 43308 Jan 29 11:14:30.320910 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:30.338121 systemd[1]: sshd@24-64.23.245.19:22-139.178.89.65:43308.service: Deactivated successfully. Jan 29 11:14:30.344493 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:14:30.351417 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:14:30.355493 systemd-logind[1450]: Removed session 25. Jan 29 11:14:35.339411 systemd[1]: Started sshd@25-64.23.245.19:22-139.178.89.65:37368.service - OpenSSH per-connection server daemon (139.178.89.65:37368). Jan 29 11:14:35.464799 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 37368 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:35.467385 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:35.476168 systemd-logind[1450]: New session 26 of user core. Jan 29 11:14:35.484100 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:14:35.675302 sshd[4277]: Connection closed by 139.178.89.65 port 37368 Jan 29 11:14:35.675153 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:35.689532 systemd[1]: sshd@25-64.23.245.19:22-139.178.89.65:37368.service: Deactivated successfully. Jan 29 11:14:35.692781 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:14:35.696054 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:14:35.704242 systemd[1]: Started sshd@26-64.23.245.19:22-139.178.89.65:37382.service - OpenSSH per-connection server daemon (139.178.89.65:37382). Jan 29 11:14:35.707504 systemd-logind[1450]: Removed session 26. Jan 29 11:14:35.780347 sshd[4288]: Accepted publickey for core from 139.178.89.65 port 37382 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:35.783534 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:35.792362 systemd-logind[1450]: New session 27 of user core. 
Jan 29 11:14:35.798042 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 11:14:36.472352 sshd[4290]: Connection closed by 139.178.89.65 port 37382 Jan 29 11:14:36.476874 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:36.494277 systemd[1]: Started sshd@27-64.23.245.19:22-139.178.89.65:37398.service - OpenSSH per-connection server daemon (139.178.89.65:37398). Jan 29 11:14:36.495234 systemd[1]: sshd@26-64.23.245.19:22-139.178.89.65:37382.service: Deactivated successfully. Jan 29 11:14:36.500087 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 11:14:36.503166 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit. Jan 29 11:14:36.506308 systemd-logind[1450]: Removed session 27. Jan 29 11:14:36.586796 sshd[4297]: Accepted publickey for core from 139.178.89.65 port 37398 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:36.589689 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:36.599342 systemd-logind[1450]: New session 28 of user core. Jan 29 11:14:36.601499 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 11:14:38.806770 sshd[4301]: Connection closed by 139.178.89.65 port 37398 Jan 29 11:14:38.810053 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:38.824229 systemd[1]: sshd@27-64.23.245.19:22-139.178.89.65:37398.service: Deactivated successfully. Jan 29 11:14:38.829915 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 11:14:38.832038 systemd-logind[1450]: Session 28 logged out. Waiting for processes to exit. Jan 29 11:14:38.844359 systemd[1]: Started sshd@28-64.23.245.19:22-139.178.89.65:37414.service - OpenSSH per-connection server daemon (139.178.89.65:37414). Jan 29 11:14:38.846932 systemd-logind[1450]: Removed session 28. Jan 29 11:14:38.922820 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 37414 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:38.924784 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:38.935816 systemd-logind[1450]: New session 29 of user core. Jan 29 11:14:38.944321 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 29 11:14:39.381877 sshd[4319]: Connection closed by 139.178.89.65 port 37414 Jan 29 11:14:39.382987 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:39.399472 systemd[1]: sshd@28-64.23.245.19:22-139.178.89.65:37414.service: Deactivated successfully. Jan 29 11:14:39.404385 systemd[1]: session-29.scope: Deactivated successfully. Jan 29 11:14:39.410990 systemd-logind[1450]: Session 29 logged out. Waiting for processes to exit. Jan 29 11:14:39.422239 systemd[1]: Started sshd@29-64.23.245.19:22-139.178.89.65:37428.service - OpenSSH per-connection server daemon (139.178.89.65:37428). Jan 29 11:14:39.426402 systemd-logind[1450]: Removed session 29. Jan 29 11:14:39.503144 sshd[4328]: Accepted publickey for core from 139.178.89.65 port 37428 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:39.506111 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:39.521233 systemd-logind[1450]: New session 30 of user core. Jan 29 11:14:39.529157 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 29 11:14:39.694894 sshd[4330]: Connection closed by 139.178.89.65 port 37428 Jan 29 11:14:39.695637 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:39.701914 systemd[1]: sshd@29-64.23.245.19:22-139.178.89.65:37428.service: Deactivated successfully. Jan 29 11:14:39.705234 systemd[1]: session-30.scope: Deactivated successfully. Jan 29 11:14:39.707010 systemd-logind[1450]: Session 30 logged out. Waiting for processes to exit. Jan 29 11:14:39.708747 systemd-logind[1450]: Removed session 30. Jan 29 11:14:40.305285 kubelet[2655]: E0129 11:14:40.305188 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:14:44.717283 systemd[1]: Started sshd@30-64.23.245.19:22-139.178.89.65:55668.service - OpenSSH per-connection server daemon (139.178.89.65:55668). Jan 29 11:14:44.780881 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 55668 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:44.783098 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:44.792879 systemd-logind[1450]: New session 31 of user core. Jan 29 11:14:44.798022 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 29 11:14:44.973685 sshd[4343]: Connection closed by 139.178.89.65 port 55668 Jan 29 11:14:44.974466 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:44.979753 systemd-logind[1450]: Session 31 logged out. Waiting for processes to exit. Jan 29 11:14:44.979874 systemd[1]: sshd@30-64.23.245.19:22-139.178.89.65:55668.service: Deactivated successfully. Jan 29 11:14:44.983533 systemd[1]: session-31.scope: Deactivated successfully. Jan 29 11:14:44.986927 systemd-logind[1450]: Removed session 31. Jan 29 11:14:49.306174 kubelet[2655]: E0129 11:14:49.304807 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:14:49.997557 systemd[1]: Started sshd@31-64.23.245.19:22-139.178.89.65:55672.service - OpenSSH per-connection server daemon (139.178.89.65:55672). Jan 29 11:14:50.079280 sshd[4354]: Accepted publickey for core from 139.178.89.65 port 55672 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:50.081777 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:50.094924 systemd-logind[1450]: New session 32 of user core. Jan 29 11:14:50.099101 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 29 11:14:50.278903 sshd[4359]: Connection closed by 139.178.89.65 port 55672 Jan 29 11:14:50.279738 sshd-session[4354]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:50.286371 systemd[1]: sshd@31-64.23.245.19:22-139.178.89.65:55672.service: Deactivated successfully. Jan 29 11:14:50.290534 systemd[1]: session-32.scope: Deactivated successfully. Jan 29 11:14:50.292375 systemd-logind[1450]: Session 32 logged out. Waiting for processes to exit. Jan 29 11:14:50.294310 systemd-logind[1450]: Removed session 32. Jan 29 11:14:55.302380 systemd[1]: Started sshd@32-64.23.245.19:22-139.178.89.65:56012.service - OpenSSH per-connection server daemon (139.178.89.65:56012). 
Jan 29 11:14:55.377988 sshd[4373]: Accepted publickey for core from 139.178.89.65 port 56012 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:14:55.380577 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:55.389565 systemd-logind[1450]: New session 33 of user core. Jan 29 11:14:55.398116 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 29 11:14:55.566388 sshd[4375]: Connection closed by 139.178.89.65 port 56012 Jan 29 11:14:55.567227 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:55.573453 systemd[1]: sshd@32-64.23.245.19:22-139.178.89.65:56012.service: Deactivated successfully. Jan 29 11:14:55.577327 systemd[1]: session-33.scope: Deactivated successfully. Jan 29 11:14:55.579509 systemd-logind[1450]: Session 33 logged out. Waiting for processes to exit. Jan 29 11:14:55.581452 systemd-logind[1450]: Removed session 33. Jan 29 11:14:57.306634 kubelet[2655]: E0129 11:14:57.304609 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:14:59.305736 kubelet[2655]: E0129 11:14:59.305669 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:00.597355 systemd[1]: Started sshd@33-64.23.245.19:22-139.178.89.65:56014.service - OpenSSH per-connection server daemon (139.178.89.65:56014). Jan 29 11:15:00.682283 sshd[4386]: Accepted publickey for core from 139.178.89.65 port 56014 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:00.685025 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:00.693959 systemd-logind[1450]: New session 34 of user core. Jan 29 11:15:00.702217 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 29 11:15:00.875299 sshd[4388]: Connection closed by 139.178.89.65 port 56014 Jan 29 11:15:00.876089 sshd-session[4386]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:00.884044 systemd[1]: sshd@33-64.23.245.19:22-139.178.89.65:56014.service: Deactivated successfully. Jan 29 11:15:00.890103 systemd[1]: session-34.scope: Deactivated successfully. Jan 29 11:15:00.892479 systemd-logind[1450]: Session 34 logged out. Waiting for processes to exit. Jan 29 11:15:00.893983 systemd-logind[1450]: Removed session 34. Jan 29 11:15:05.899171 systemd[1]: Started sshd@34-64.23.245.19:22-139.178.89.65:49378.service - OpenSSH per-connection server daemon (139.178.89.65:49378). Jan 29 11:15:05.971142 sshd[4399]: Accepted publickey for core from 139.178.89.65 port 49378 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:05.974314 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:05.983744 systemd-logind[1450]: New session 35 of user core. Jan 29 11:15:05.988113 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 29 11:15:06.152694 sshd[4401]: Connection closed by 139.178.89.65 port 49378 Jan 29 11:15:06.152433 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:06.164684 systemd[1]: sshd@34-64.23.245.19:22-139.178.89.65:49378.service: Deactivated successfully. Jan 29 11:15:06.168018 systemd[1]: session-35.scope: Deactivated successfully. 
Jan 29 11:15:06.171069 systemd-logind[1450]: Session 35 logged out. Waiting for processes to exit. Jan 29 11:15:06.178223 systemd[1]: Started sshd@35-64.23.245.19:22-139.178.89.65:49386.service - OpenSSH per-connection server daemon (139.178.89.65:49386). Jan 29 11:15:06.180570 systemd-logind[1450]: Removed session 35. Jan 29 11:15:06.252402 sshd[4412]: Accepted publickey for core from 139.178.89.65 port 49386 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:06.255657 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:06.264461 systemd-logind[1450]: New session 36 of user core. Jan 29 11:15:06.278008 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 29 11:15:07.903542 containerd[1475]: time="2025-01-29T11:15:07.903181728Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:15:07.953625 containerd[1475]: time="2025-01-29T11:15:07.953523649Z" level=info msg="StopContainer for \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\" with timeout 2 (s)" Jan 29 11:15:07.954865 containerd[1475]: time="2025-01-29T11:15:07.953523846Z" level=info msg="StopContainer for \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\" with timeout 30 (s)" Jan 29 11:15:07.956270 containerd[1475]: time="2025-01-29T11:15:07.956120028Z" level=info msg="Stop container \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\" with signal terminated" Jan 29 11:15:07.956644 containerd[1475]: time="2025-01-29T11:15:07.956448129Z" level=info msg="Stop container \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\" with signal terminated" Jan 29 11:15:07.970109 systemd-networkd[1357]: lxc_health: Link DOWN Jan 29 11:15:07.970122 systemd-networkd[1357]: lxc_health: Lost carrier Jan 29 11:15:08.006192 systemd[1]: cri-containerd-b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a.scope: Deactivated successfully. Jan 29 11:15:08.009802 systemd[1]: cri-containerd-4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef.scope: Deactivated successfully. Jan 29 11:15:08.010181 systemd[1]: cri-containerd-4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef.scope: Consumed 11.900s CPU time. Jan 29 11:15:08.053533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef-rootfs.mount: Deactivated successfully. Jan 29 11:15:08.069260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a-rootfs.mount: Deactivated successfully. 
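The two StopContainer calls above carry different grace periods (timeout 2 s for one cilium container, 30 s for the other), and both begin with SIGTERM ("with signal terminated"); a runtime escalates to SIGKILL only if the process outlives its timeout. Here both containers exited within their grace periods, as the scope deactivations show. A generic sketch of that terminate-then-kill pattern, not containerd's implementation:

```go
// Generic terminate-then-kill sketch mirroring the StopContainer semantics
// above: send SIGTERM, wait up to the grace period, then SIGKILL.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		fmt.Println("grace period elapsed, sending SIGKILL")
		return cmd.Process.Kill()
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// sleep dies on SIGTERM, so this returns "signal: terminated" promptly.
	fmt.Println(stopWithTimeout(cmd, 2*time.Second))
}
```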
Jan 29 11:15:08.095941 containerd[1475]: time="2025-01-29T11:15:08.095602610Z" level=info msg="shim disconnected" id=4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef namespace=k8s.io Jan 29 11:15:08.095941 containerd[1475]: time="2025-01-29T11:15:08.095705128Z" level=warning msg="cleaning up after shim disconnected" id=4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef namespace=k8s.io Jan 29 11:15:08.095941 containerd[1475]: time="2025-01-29T11:15:08.095744744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:08.096892 containerd[1475]: time="2025-01-29T11:15:08.096628359Z" level=info msg="shim disconnected" id=b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a namespace=k8s.io Jan 29 11:15:08.097313 containerd[1475]: time="2025-01-29T11:15:08.096694936Z" level=warning msg="cleaning up after shim disconnected" id=b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a namespace=k8s.io Jan 29 11:15:08.097313 containerd[1475]: time="2025-01-29T11:15:08.097136535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:08.122924 containerd[1475]: time="2025-01-29T11:15:08.122868388Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:15:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:15:08.131544 containerd[1475]: time="2025-01-29T11:15:08.130937449Z" level=info msg="StopContainer for \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\" returns successfully" Jan 29 11:15:08.131544 containerd[1475]: time="2025-01-29T11:15:08.131445590Z" level=info msg="StopContainer for \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\" returns successfully" Jan 29 11:15:08.132565 containerd[1475]: time="2025-01-29T11:15:08.132514491Z" level=info msg="StopPodSandbox for \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\"" Jan 29 11:15:08.132787 containerd[1475]: time="2025-01-29T11:15:08.132704722Z" level=info msg="StopPodSandbox for \"91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5\"" Jan 29 11:15:08.134702 containerd[1475]: time="2025-01-29T11:15:08.134579838Z" level=info msg="Container to stop \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:08.134702 containerd[1475]: time="2025-01-29T11:15:08.134694696Z" level=info msg="Container to stop \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:08.134942 containerd[1475]: time="2025-01-29T11:15:08.134731889Z" level=info msg="Container to stop \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:08.134942 containerd[1475]: time="2025-01-29T11:15:08.134754541Z" level=info msg="Container to stop \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:08.134942 containerd[1475]: time="2025-01-29T11:15:08.134849786Z" level=info msg="Container to stop \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:08.137631 containerd[1475]: time="2025-01-29T11:15:08.134579909Z" 
level=info msg="Container to stop \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:08.140049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7-shm.mount: Deactivated successfully. Jan 29 11:15:08.146341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5-shm.mount: Deactivated successfully. Jan 29 11:15:08.159894 systemd[1]: cri-containerd-91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5.scope: Deactivated successfully. Jan 29 11:15:08.174899 systemd[1]: cri-containerd-3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7.scope: Deactivated successfully. Jan 29 11:15:08.226995 containerd[1475]: time="2025-01-29T11:15:08.226665690Z" level=info msg="shim disconnected" id=3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7 namespace=k8s.io Jan 29 11:15:08.226995 containerd[1475]: time="2025-01-29T11:15:08.226763463Z" level=warning msg="cleaning up after shim disconnected" id=3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7 namespace=k8s.io Jan 29 11:15:08.226995 containerd[1475]: time="2025-01-29T11:15:08.226778779Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:08.229311 containerd[1475]: time="2025-01-29T11:15:08.228934008Z" level=info msg="shim disconnected" id=91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5 namespace=k8s.io Jan 29 11:15:08.229644 containerd[1475]: time="2025-01-29T11:15:08.229406030Z" level=warning msg="cleaning up after shim disconnected" id=91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5 namespace=k8s.io Jan 29 11:15:08.229644 containerd[1475]: time="2025-01-29T11:15:08.229434846Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:08.255280 containerd[1475]: time="2025-01-29T11:15:08.255202581Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:15:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:15:08.257618 containerd[1475]: time="2025-01-29T11:15:08.257570590Z" level=info msg="TearDown network for sandbox \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" successfully" Jan 29 11:15:08.257955 containerd[1475]: time="2025-01-29T11:15:08.257921547Z" level=info msg="StopPodSandbox for \"3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7\" returns successfully" Jan 29 11:15:08.263974 containerd[1475]: time="2025-01-29T11:15:08.263916572Z" level=info msg="TearDown network for sandbox \"91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5\" successfully" Jan 29 11:15:08.265842 containerd[1475]: time="2025-01-29T11:15:08.265318798Z" level=info msg="StopPodSandbox for \"91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5\" returns successfully" Jan 29 11:15:08.448500 kubelet[2655]: I0129 11:15:08.447306 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-bpf-maps\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.448500 kubelet[2655]: I0129 11:15:08.447929 2655 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-cgroup\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.448500 kubelet[2655]: I0129 11:15:08.447969 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfjp4\" (UniqueName: \"kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-kube-api-access-dfjp4\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.448500 kubelet[2655]: I0129 11:15:08.448032 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-lib-modules\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.448500 kubelet[2655]: I0129 11:15:08.448057 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/763e1748-bde8-4902-bb57-72597b8701ef-cilium-config-path\") pod \"763e1748-bde8-4902-bb57-72597b8701ef\" (UID: \"763e1748-bde8-4902-bb57-72597b8701ef\") " Jan 29 11:15:08.448500 kubelet[2655]: I0129 11:15:08.448075 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5srj\" (UniqueName: \"kubernetes.io/projected/763e1748-bde8-4902-bb57-72597b8701ef-kube-api-access-s5srj\") pod \"763e1748-bde8-4902-bb57-72597b8701ef\" (UID: \"763e1748-bde8-4902-bb57-72597b8701ef\") " Jan 29 11:15:08.449597 kubelet[2655]: I0129 11:15:08.448090 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-etc-cni-netd\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449597 kubelet[2655]: I0129 11:15:08.448079 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.449597 kubelet[2655]: I0129 11:15:08.448108 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be7e408d-2297-4e2e-9590-197a09d4d70c-clustermesh-secrets\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449597 kubelet[2655]: I0129 11:15:08.448125 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-hubble-tls\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449597 kubelet[2655]: I0129 11:15:08.448140 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cni-path\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449597 kubelet[2655]: I0129 11:15:08.448161 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-config-path\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449823 kubelet[2655]: I0129 11:15:08.448177 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-kernel\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449823 kubelet[2655]: I0129 11:15:08.448192 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-run\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449823 kubelet[2655]: I0129 11:15:08.448206 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-xtables-lock\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449823 kubelet[2655]: I0129 11:15:08.448221 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-net\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449823 kubelet[2655]: I0129 11:15:08.448240 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-hostproc\") pod \"be7e408d-2297-4e2e-9590-197a09d4d70c\" (UID: \"be7e408d-2297-4e2e-9590-197a09d4d70c\") " Jan 29 11:15:08.449823 kubelet[2655]: I0129 11:15:08.448293 2655 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-bpf-maps\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.450278 kubelet[2655]: I0129 11:15:08.448328 2655 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-hostproc" (OuterVolumeSpecName: "hostproc") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.453746 kubelet[2655]: I0129 11:15:08.451419 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/763e1748-bde8-4902-bb57-72597b8701ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "763e1748-bde8-4902-bb57-72597b8701ef" (UID: "763e1748-bde8-4902-bb57-72597b8701ef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:08.453746 kubelet[2655]: I0129 11:15:08.451520 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.455461 kubelet[2655]: I0129 11:15:08.455398 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-kube-api-access-dfjp4" (OuterVolumeSpecName: "kube-api-access-dfjp4") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "kube-api-access-dfjp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:08.455650 kubelet[2655]: I0129 11:15:08.455490 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.455820 kubelet[2655]: I0129 11:15:08.455772 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763e1748-bde8-4902-bb57-72597b8701ef-kube-api-access-s5srj" (OuterVolumeSpecName: "kube-api-access-s5srj") pod "763e1748-bde8-4902-bb57-72597b8701ef" (UID: "763e1748-bde8-4902-bb57-72597b8701ef"). InnerVolumeSpecName "kube-api-access-s5srj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:08.455971 kubelet[2655]: I0129 11:15:08.455944 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.456113 kubelet[2655]: I0129 11:15:08.456082 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.456223 kubelet[2655]: I0129 11:15:08.456205 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.456340 kubelet[2655]: I0129 11:15:08.456320 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.459549 kubelet[2655]: I0129 11:15:08.459433 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:08.461254 kubelet[2655]: I0129 11:15:08.461194 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7e408d-2297-4e2e-9590-197a09d4d70c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:08.461530 kubelet[2655]: I0129 11:15:08.461500 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.461666 kubelet[2655]: I0129 11:15:08.461645 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cni-path" (OuterVolumeSpecName: "cni-path") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:08.463757 kubelet[2655]: I0129 11:15:08.463676 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be7e408d-2297-4e2e-9590-197a09d4d70c" (UID: "be7e408d-2297-4e2e-9590-197a09d4d70c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:08.549318 kubelet[2655]: I0129 11:15:08.549249 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-config-path\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.549592 kubelet[2655]: I0129 11:15:08.549566 2655 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-hubble-tls\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549684 2655 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cni-path\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549705 2655 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-kernel\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549742 2655 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-xtables-lock\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549762 2655 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-host-proc-sys-net\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549779 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-run\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549793 2655 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-hostproc\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549807 2655 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dfjp4\" (UniqueName: \"kubernetes.io/projected/be7e408d-2297-4e2e-9590-197a09d4d70c-kube-api-access-dfjp4\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550048 kubelet[2655]: I0129 11:15:08.549861 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-cilium-cgroup\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550465 kubelet[2655]: I0129 11:15:08.549878 2655 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-etc-cni-netd\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550465 kubelet[2655]: I0129 11:15:08.549891 2655 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be7e408d-2297-4e2e-9590-197a09d4d70c-clustermesh-secrets\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550465 kubelet[2655]: I0129 11:15:08.549905 2655 
reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7e408d-2297-4e2e-9590-197a09d4d70c-lib-modules\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550465 kubelet[2655]: I0129 11:15:08.549919 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/763e1748-bde8-4902-bb57-72597b8701ef-cilium-config-path\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.550465 kubelet[2655]: I0129 11:15:08.549933 2655 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s5srj\" (UniqueName: \"kubernetes.io/projected/763e1748-bde8-4902-bb57-72597b8701ef-kube-api-access-s5srj\") on node \"ci-4186.1.0-4-1698ea429f\" DevicePath \"\"" Jan 29 11:15:08.875071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3393bdb5c4c58484ff2fd5c7c8430f415a3af55297fd792bba08cce860d0d0f7-rootfs.mount: Deactivated successfully. Jan 29 11:15:08.875271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91f41f437b203672b810b2dbdb40b79d9340d4bc2fc6de5c1fd9d7e5b48ea9a5-rootfs.mount: Deactivated successfully. Jan 29 11:15:08.875383 systemd[1]: var-lib-kubelet-pods-763e1748\x2dbde8\x2d4902\x2dbb57\x2d72597b8701ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5srj.mount: Deactivated successfully. Jan 29 11:15:08.875487 systemd[1]: var-lib-kubelet-pods-be7e408d\x2d2297\x2d4e2e\x2d9590\x2d197a09d4d70c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfjp4.mount: Deactivated successfully. Jan 29 11:15:08.875597 systemd[1]: var-lib-kubelet-pods-be7e408d\x2d2297\x2d4e2e\x2d9590\x2d197a09d4d70c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:15:08.875699 systemd[1]: var-lib-kubelet-pods-be7e408d\x2d2297\x2d4e2e\x2d9590\x2d197a09d4d70c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:15:08.991766 kubelet[2655]: I0129 11:15:08.990230 2655 scope.go:117] "RemoveContainer" containerID="4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef" Jan 29 11:15:09.004873 systemd[1]: Removed slice kubepods-burstable-podbe7e408d_2297_4e2e_9590_197a09d4d70c.slice - libcontainer container kubepods-burstable-podbe7e408d_2297_4e2e_9590_197a09d4d70c.slice. Jan 29 11:15:09.005363 systemd[1]: kubepods-burstable-podbe7e408d_2297_4e2e_9590_197a09d4d70c.slice: Consumed 12.025s CPU time. Jan 29 11:15:09.019089 containerd[1475]: time="2025-01-29T11:15:09.018997658Z" level=info msg="RemoveContainer for \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\"" Jan 29 11:15:09.026695 systemd[1]: Removed slice kubepods-besteffort-pod763e1748_bde8_4902_bb57_72597b8701ef.slice - libcontainer container kubepods-besteffort-pod763e1748_bde8_4902_bb57_72597b8701ef.slice. 
Jan 29 11:15:09.034935 containerd[1475]: time="2025-01-29T11:15:09.034685507Z" level=info msg="RemoveContainer for \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\" returns successfully"
Jan 29 11:15:09.043916 kubelet[2655]: I0129 11:15:09.043143 2655 scope.go:117] "RemoveContainer" containerID="4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc"
Jan 29 11:15:09.053221 containerd[1475]: time="2025-01-29T11:15:09.052771241Z" level=info msg="RemoveContainer for \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\""
Jan 29 11:15:09.070521 containerd[1475]: time="2025-01-29T11:15:09.069852760Z" level=info msg="RemoveContainer for \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\" returns successfully"
Jan 29 11:15:09.071589 kubelet[2655]: I0129 11:15:09.071292 2655 scope.go:117] "RemoveContainer" containerID="f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679"
Jan 29 11:15:09.077601 containerd[1475]: time="2025-01-29T11:15:09.077549585Z" level=info msg="RemoveContainer for \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\""
Jan 29 11:15:09.088405 containerd[1475]: time="2025-01-29T11:15:09.088349482Z" level=info msg="RemoveContainer for \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\" returns successfully"
Jan 29 11:15:09.090494 kubelet[2655]: I0129 11:15:09.089019 2655 scope.go:117] "RemoveContainer" containerID="0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387"
Jan 29 11:15:09.094632 containerd[1475]: time="2025-01-29T11:15:09.094556781Z" level=info msg="RemoveContainer for \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\""
Jan 29 11:15:09.105584 containerd[1475]: time="2025-01-29T11:15:09.105531440Z" level=info msg="RemoveContainer for \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\" returns successfully"
Jan 29 11:15:09.106244 kubelet[2655]: I0129 11:15:09.106174 2655 scope.go:117] "RemoveContainer" containerID="6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c"
Jan 29 11:15:09.108040 containerd[1475]: time="2025-01-29T11:15:09.107996612Z" level=info msg="RemoveContainer for \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\""
Jan 29 11:15:09.118751 containerd[1475]: time="2025-01-29T11:15:09.118629373Z" level=info msg="RemoveContainer for \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\" returns successfully"
Jan 29 11:15:09.119970 kubelet[2655]: I0129 11:15:09.119572 2655 scope.go:117] "RemoveContainer" containerID="4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef"
Jan 29 11:15:09.120845 containerd[1475]: time="2025-01-29T11:15:09.120015449Z" level=error msg="ContainerStatus for \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\": not found"
Jan 29 11:15:09.123874 kubelet[2655]: E0129 11:15:09.121654 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\": not found" containerID="4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef"
Jan 29 11:15:09.127257 kubelet[2655]: I0129 11:15:09.121800 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef"} err="failed to get container status \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ce18c9dda5f0d4cc241115893d887ecf9da6ba080744a31a968a074494b31ef\": not found"
Jan 29 11:15:09.127257 kubelet[2655]: I0129 11:15:09.125781 2655 scope.go:117] "RemoveContainer" containerID="4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc"
Jan 29 11:15:09.127257 kubelet[2655]: E0129 11:15:09.126925 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\": not found" containerID="4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc"
Jan 29 11:15:09.127257 kubelet[2655]: I0129 11:15:09.126967 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc"} err="failed to get container status \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\": not found"
Jan 29 11:15:09.127257 kubelet[2655]: I0129 11:15:09.127006 2655 scope.go:117] "RemoveContainer" containerID="f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679"
Jan 29 11:15:09.127564 containerd[1475]: time="2025-01-29T11:15:09.126447989Z" level=error msg="ContainerStatus for \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4298ae8d12ea9b9c8f9e8c2df636c9c60769aa634ca49597611fb8626a2483dc\": not found"
Jan 29 11:15:09.127564 containerd[1475]: time="2025-01-29T11:15:09.127303998Z" level=error msg="ContainerStatus for \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\": not found"
Jan 29 11:15:09.128445 kubelet[2655]: E0129 11:15:09.128366 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\": not found" containerID="f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679"
Jan 29 11:15:09.128521 kubelet[2655]: I0129 11:15:09.128461 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679"} err="failed to get container status \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3dd0401857a1644100bd849a7ac27d9dd853d1feeb23d89ba18383cbcac2679\": not found"
Jan 29 11:15:09.128521 kubelet[2655]: I0129 11:15:09.128494 2655 scope.go:117] "RemoveContainer" containerID="0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387"
Jan 29 11:15:09.129288 containerd[1475]: time="2025-01-29T11:15:09.129223651Z" level=error msg="ContainerStatus for \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\": not found"
Jan 29 11:15:09.130113 kubelet[2655]: E0129 11:15:09.130073 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\": not found" containerID="0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387"
Jan 29 11:15:09.130240 kubelet[2655]: I0129 11:15:09.130150 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387"} err="failed to get container status \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\": rpc error: code = NotFound desc = an error occurred when try to find container \"0382fba2676ded5580433fdbecc3021cdc0275f9b48999d3f080c10ce189b387\": not found"
Jan 29 11:15:09.130240 kubelet[2655]: I0129 11:15:09.130212 2655 scope.go:117] "RemoveContainer" containerID="6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c"
Jan 29 11:15:09.130611 containerd[1475]: time="2025-01-29T11:15:09.130564163Z" level=error msg="ContainerStatus for \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\": not found"
Jan 29 11:15:09.130841 kubelet[2655]: E0129 11:15:09.130794 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\": not found" containerID="6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c"
Jan 29 11:15:09.130954 kubelet[2655]: I0129 11:15:09.130833 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c"} err="failed to get container status \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d07717c18899df4c1ff018d869dc9f13a5e3c2b25dcb221e452f3229455353c\": not found"
Jan 29 11:15:09.130954 kubelet[2655]: I0129 11:15:09.130896 2655 scope.go:117] "RemoveContainer" containerID="b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a"
Jan 29 11:15:09.132865 containerd[1475]: time="2025-01-29T11:15:09.132828842Z" level=info msg="RemoveContainer for \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\""
Jan 29 11:15:09.142332 containerd[1475]: time="2025-01-29T11:15:09.142278555Z" level=info msg="RemoveContainer for \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\" returns successfully"
Jan 29 11:15:09.142914 kubelet[2655]: I0129 11:15:09.142855 2655 scope.go:117] "RemoveContainer" containerID="b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a"
Jan 29 11:15:09.143371 containerd[1475]: time="2025-01-29T11:15:09.143315382Z" level=error msg="ContainerStatus for \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\": not found"
Jan 29 11:15:09.143539 kubelet[2655]: E0129 11:15:09.143497 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\": not found" containerID="b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a"
Jan 29 11:15:09.143629 kubelet[2655]: I0129 11:15:09.143534 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a"} err="failed to get container status \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b393a8721e9a1bbc0ac4f533bb834bb99326ffd9ffb06279da9580a5f14a2e5a\": not found"
Jan 29 11:15:09.747915 sshd[4414]: Connection closed by 139.178.89.65 port 49386
Jan 29 11:15:09.750156 sshd-session[4412]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:09.762659 systemd[1]: sshd@35-64.23.245.19:22-139.178.89.65:49386.service: Deactivated successfully.
Jan 29 11:15:09.766019 systemd[1]: session-36.scope: Deactivated successfully.
Jan 29 11:15:09.768902 systemd-logind[1450]: Session 36 logged out. Waiting for processes to exit.
Jan 29 11:15:09.776261 systemd[1]: Started sshd@36-64.23.245.19:22-139.178.89.65:49402.service - OpenSSH per-connection server daemon (139.178.89.65:49402).
Jan 29 11:15:09.780262 systemd-logind[1450]: Removed session 36.
Jan 29 11:15:09.850772 sshd[4571]: Accepted publickey for core from 139.178.89.65 port 49402 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:15:09.852628 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:09.860849 systemd-logind[1450]: New session 37 of user core.
Jan 29 11:15:09.868135 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 29 11:15:10.307748 kubelet[2655]: I0129 11:15:10.306995 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763e1748-bde8-4902-bb57-72597b8701ef" path="/var/lib/kubelet/pods/763e1748-bde8-4902-bb57-72597b8701ef/volumes"
Jan 29 11:15:10.307748 kubelet[2655]: I0129 11:15:10.307742 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" path="/var/lib/kubelet/pods/be7e408d-2297-4e2e-9590-197a09d4d70c/volumes"
Jan 29 11:15:10.978857 sshd[4573]: Connection closed by 139.178.89.65 port 49402
Jan 29 11:15:10.980108 sshd-session[4571]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:10.992455 systemd[1]: sshd@36-64.23.245.19:22-139.178.89.65:49402.service: Deactivated successfully.
Jan 29 11:15:10.999044 systemd[1]: session-37.scope: Deactivated successfully.
Jan 29 11:15:11.002121 systemd-logind[1450]: Session 37 logged out. Waiting for processes to exit.
Jan 29 11:15:11.013509 systemd[1]: Started sshd@37-64.23.245.19:22-139.178.89.65:56072.service - OpenSSH per-connection server daemon (139.178.89.65:56072).
Jan 29 11:15:11.016157 kubelet[2655]: I0129 11:15:11.013632 2655 topology_manager.go:215] "Topology Admit Handler" podUID="46885544-b08a-41b3-83e6-7fcd3b71d9a9" podNamespace="kube-system" podName="cilium-q77s6"
Jan 29 11:15:11.022380 systemd-logind[1450]: Removed session 37.
Jan 29 11:15:11.025582 kubelet[2655]: E0129 11:15:11.025527 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="763e1748-bde8-4902-bb57-72597b8701ef" containerName="cilium-operator"
Jan 29 11:15:11.025776 kubelet[2655]: E0129 11:15:11.025765 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" containerName="clean-cilium-state"
Jan 29 11:15:11.025879 kubelet[2655]: E0129 11:15:11.025869 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" containerName="mount-bpf-fs"
Jan 29 11:15:11.026044 kubelet[2655]: E0129 11:15:11.025952 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" containerName="cilium-agent"
Jan 29 11:15:11.026044 kubelet[2655]: E0129 11:15:11.025964 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" containerName="mount-cgroup"
Jan 29 11:15:11.026044 kubelet[2655]: E0129 11:15:11.025972 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" containerName="apply-sysctl-overwrites"
Jan 29 11:15:11.026528 kubelet[2655]: I0129 11:15:11.026246 2655 memory_manager.go:354] "RemoveStaleState removing state" podUID="763e1748-bde8-4902-bb57-72597b8701ef" containerName="cilium-operator"
Jan 29 11:15:11.026528 kubelet[2655]: I0129 11:15:11.026260 2655 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7e408d-2297-4e2e-9590-197a09d4d70c" containerName="cilium-agent"
Jan 29 11:15:11.070883 kubelet[2655]: I0129 11:15:11.070822 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46885544-b08a-41b3-83e6-7fcd3b71d9a9-hubble-tls\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.070883 kubelet[2655]: I0129 11:15:11.070886 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-cilium-run\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071132 kubelet[2655]: I0129 11:15:11.070920 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46885544-b08a-41b3-83e6-7fcd3b71d9a9-clustermesh-secrets\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071132 kubelet[2655]: I0129 11:15:11.070946 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-bpf-maps\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071132 kubelet[2655]: I0129 11:15:11.070978 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-cilium-cgroup\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071132 kubelet[2655]: I0129 11:15:11.071002 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-cni-path\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071132 kubelet[2655]: I0129 11:15:11.071030 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-host-proc-sys-net\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071132 kubelet[2655]: I0129 11:15:11.071055 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-etc-cni-netd\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071395 kubelet[2655]: I0129 11:15:11.071085 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46885544-b08a-41b3-83e6-7fcd3b71d9a9-cilium-config-path\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071395 kubelet[2655]: I0129 11:15:11.071112 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjgw\" (UniqueName: \"kubernetes.io/projected/46885544-b08a-41b3-83e6-7fcd3b71d9a9-kube-api-access-sjjgw\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071395 kubelet[2655]: I0129 11:15:11.071139 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-xtables-lock\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071395 kubelet[2655]: I0129 11:15:11.071163 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-host-proc-sys-kernel\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071395 kubelet[2655]: I0129 11:15:11.071192 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/46885544-b08a-41b3-83e6-7fcd3b71d9a9-cilium-ipsec-secrets\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071625 kubelet[2655]: I0129 11:15:11.071222 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-hostproc\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.071625 kubelet[2655]: I0129 11:15:11.071248 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46885544-b08a-41b3-83e6-7fcd3b71d9a9-lib-modules\") pod \"cilium-q77s6\" (UID: \"46885544-b08a-41b3-83e6-7fcd3b71d9a9\") " pod="kube-system/cilium-q77s6"
Jan 29 11:15:11.109848 systemd[1]: Created slice kubepods-burstable-pod46885544_b08a_41b3_83e6_7fcd3b71d9a9.slice - libcontainer container kubepods-burstable-pod46885544_b08a_41b3_83e6_7fcd3b71d9a9.slice.
Jan 29 11:15:11.140502 sshd[4582]: Accepted publickey for core from 139.178.89.65 port 56072 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:15:11.146098 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:11.158653 systemd-logind[1450]: New session 38 of user core.
Jan 29 11:15:11.165098 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 29 11:15:11.235029 sshd[4585]: Connection closed by 139.178.89.65 port 56072
Jan 29 11:15:11.235774 sshd-session[4582]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:11.252187 systemd[1]: sshd@37-64.23.245.19:22-139.178.89.65:56072.service: Deactivated successfully.
Jan 29 11:15:11.256287 systemd[1]: session-38.scope: Deactivated successfully.
Jan 29 11:15:11.260921 systemd-logind[1450]: Session 38 logged out. Waiting for processes to exit.
Jan 29 11:15:11.270427 systemd[1]: Started sshd@38-64.23.245.19:22-139.178.89.65:56084.service - OpenSSH per-connection server daemon (139.178.89.65:56084).
Jan 29 11:15:11.277823 systemd-logind[1450]: Removed session 38.
Jan 29 11:15:11.337787 sshd[4595]: Accepted publickey for core from 139.178.89.65 port 56084 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:15:11.338997 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:11.346772 systemd-logind[1450]: New session 39 of user core.
Jan 29 11:15:11.353044 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 29 11:15:11.424818 kubelet[2655]: E0129 11:15:11.422840 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:11.428380 containerd[1475]: time="2025-01-29T11:15:11.425962663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q77s6,Uid:46885544-b08a-41b3-83e6-7fcd3b71d9a9,Namespace:kube-system,Attempt:0,}"
Jan 29 11:15:11.499192 containerd[1475]: time="2025-01-29T11:15:11.498908751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:15:11.499192 containerd[1475]: time="2025-01-29T11:15:11.499071959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:15:11.499192 containerd[1475]: time="2025-01-29T11:15:11.499107543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:11.501693 containerd[1475]: time="2025-01-29T11:15:11.501604817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:11.542076 systemd[1]: Started cri-containerd-79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af.scope - libcontainer container 79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af.
Jan 29 11:15:11.584385 containerd[1475]: time="2025-01-29T11:15:11.584313251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q77s6,Uid:46885544-b08a-41b3-83e6-7fcd3b71d9a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\""
Jan 29 11:15:11.588499 kubelet[2655]: E0129 11:15:11.588447 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:11.595173 containerd[1475]: time="2025-01-29T11:15:11.594732036Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:15:11.614460 kubelet[2655]: E0129 11:15:11.614355 2655 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:15:11.629660 containerd[1475]: time="2025-01-29T11:15:11.629590993Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647\""
Jan 29 11:15:11.631833 containerd[1475]: time="2025-01-29T11:15:11.630964300Z" level=info msg="StartContainer for \"b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647\""
Jan 29 11:15:11.674531 systemd[1]: Started cri-containerd-b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647.scope - libcontainer container b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647.
Jan 29 11:15:11.722269 containerd[1475]: time="2025-01-29T11:15:11.722148517Z" level=info msg="StartContainer for \"b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647\" returns successfully"
Jan 29 11:15:11.735921 systemd[1]: cri-containerd-b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647.scope: Deactivated successfully.
Jan 29 11:15:11.781741 containerd[1475]: time="2025-01-29T11:15:11.781487614Z" level=info msg="shim disconnected" id=b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647 namespace=k8s.io
Jan 29 11:15:11.781741 containerd[1475]: time="2025-01-29T11:15:11.781555599Z" level=warning msg="cleaning up after shim disconnected" id=b8e65bf48110011167e6403eb7f585530601afbb21f215dfe27feb5cf2fed647 namespace=k8s.io
Jan 29 11:15:11.781741 containerd[1475]: time="2025-01-29T11:15:11.781567061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:12.028390 kubelet[2655]: E0129 11:15:12.028341 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:12.031038 containerd[1475]: time="2025-01-29T11:15:12.030968265Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:15:12.059923 containerd[1475]: time="2025-01-29T11:15:12.059648868Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232\""
Jan 29 11:15:12.062666 containerd[1475]: time="2025-01-29T11:15:12.061075913Z" level=info msg="StartContainer for \"35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232\""
Jan 29 11:15:12.108066 systemd[1]: Started cri-containerd-35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232.scope - libcontainer container 35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232.
Jan 29 11:15:12.148068 containerd[1475]: time="2025-01-29T11:15:12.147978244Z" level=info msg="StartContainer for \"35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232\" returns successfully"
Jan 29 11:15:12.159041 systemd[1]: cri-containerd-35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232.scope: Deactivated successfully.
Jan 29 11:15:12.202890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232-rootfs.mount: Deactivated successfully.
Jan 29 11:15:12.215780 containerd[1475]: time="2025-01-29T11:15:12.215442037Z" level=info msg="shim disconnected" id=35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232 namespace=k8s.io
Jan 29 11:15:12.215780 containerd[1475]: time="2025-01-29T11:15:12.215517803Z" level=warning msg="cleaning up after shim disconnected" id=35687fde179ddd0e9491a29c96d0b6966ea265c0d6cc3320385960d733744232 namespace=k8s.io
Jan 29 11:15:12.215780 containerd[1475]: time="2025-01-29T11:15:12.215529029Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:13.032705 kubelet[2655]: E0129 11:15:13.032647 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:13.039223 containerd[1475]: time="2025-01-29T11:15:13.038918698Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:15:13.084058 containerd[1475]: time="2025-01-29T11:15:13.083502136Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6\""
Jan 29 11:15:13.084766 containerd[1475]: time="2025-01-29T11:15:13.084383422Z" level=info msg="StartContainer for \"26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6\""
Jan 29 11:15:13.146755 systemd[1]: Started cri-containerd-26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6.scope - libcontainer container 26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6.
Jan 29 11:15:13.208473 containerd[1475]: time="2025-01-29T11:15:13.208415450Z" level=info msg="StartContainer for \"26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6\" returns successfully"
Jan 29 11:15:13.221241 systemd[1]: cri-containerd-26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6.scope: Deactivated successfully.
Jan 29 11:15:13.271409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6-rootfs.mount: Deactivated successfully.
Jan 29 11:15:13.274642 containerd[1475]: time="2025-01-29T11:15:13.274556667Z" level=info msg="shim disconnected" id=26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6 namespace=k8s.io
Jan 29 11:15:13.275100 containerd[1475]: time="2025-01-29T11:15:13.274904263Z" level=warning msg="cleaning up after shim disconnected" id=26c4cf71c68ef221114524cea7d9924bdc53a3990ac2ffc4ace567eb82f22cb6 namespace=k8s.io
Jan 29 11:15:13.275100 containerd[1475]: time="2025-01-29T11:15:13.274940659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:14.038356 kubelet[2655]: E0129 11:15:14.038033 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:14.043751 containerd[1475]: time="2025-01-29T11:15:14.042221039Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:15:14.107281 containerd[1475]: time="2025-01-29T11:15:14.106481379Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20\""
Jan 29 11:15:14.109778 containerd[1475]: time="2025-01-29T11:15:14.108932427Z" level=info msg="StartContainer for \"b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20\""
Jan 29 11:15:14.167102 systemd[1]: Started cri-containerd-b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20.scope - libcontainer container b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20.
Jan 29 11:15:14.206311 systemd[1]: cri-containerd-b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20.scope: Deactivated successfully.
Jan 29 11:15:14.209179 containerd[1475]: time="2025-01-29T11:15:14.209000540Z" level=info msg="StartContainer for \"b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20\" returns successfully"
Jan 29 11:15:14.244699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20-rootfs.mount: Deactivated successfully.
Jan 29 11:15:14.249984 containerd[1475]: time="2025-01-29T11:15:14.249678793Z" level=info msg="shim disconnected" id=b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20 namespace=k8s.io
Jan 29 11:15:14.249984 containerd[1475]: time="2025-01-29T11:15:14.249959289Z" level=warning msg="cleaning up after shim disconnected" id=b6d8a257a6f5e7c6e46e572ba6205ab94f94f8b45866b199e5335a6eed71bf20 namespace=k8s.io
Jan 29 11:15:14.250205 containerd[1475]: time="2025-01-29T11:15:14.250011597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:15.044864 kubelet[2655]: E0129 11:15:15.044795 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:15.053269 containerd[1475]: time="2025-01-29T11:15:15.051394627Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:15:15.100373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915002768.mount: Deactivated successfully.
Jan 29 11:15:15.109163 containerd[1475]: time="2025-01-29T11:15:15.108935761Z" level=info msg="CreateContainer within sandbox \"79003d7f56046feb8b58a34fc23ffbc33a06a4f29b1604c1560546549a0595af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b\""
Jan 29 11:15:15.111637 containerd[1475]: time="2025-01-29T11:15:15.111575976Z" level=info msg="StartContainer for \"f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b\""
Jan 29 11:15:15.167100 systemd[1]: Started cri-containerd-f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b.scope - libcontainer container f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b.
Jan 29 11:15:15.224292 containerd[1475]: time="2025-01-29T11:15:15.224225862Z" level=info msg="StartContainer for \"f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b\" returns successfully"
Jan 29 11:15:15.969874 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 11:15:16.055742 kubelet[2655]: E0129 11:15:16.055031 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:17.425599 kubelet[2655]: E0129 11:15:17.425448 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:20.190942 systemd-networkd[1357]: lxc_health: Link UP
Jan 29 11:15:20.198665 systemd-networkd[1357]: lxc_health: Gained carrier
Jan 29 11:15:20.367280 systemd[1]: run-containerd-runc-k8s.io-f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b-runc.EVTsDm.mount: Deactivated successfully.
Jan 29 11:15:21.427451 kubelet[2655]: E0129 11:15:21.427403 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:21.456509 kubelet[2655]: I0129 11:15:21.455851 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q77s6" podStartSLOduration=11.455825654 podStartE2EDuration="11.455825654s" podCreationTimestamp="2025-01-29 11:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:15:16.088056417 +0000 UTC m=+189.974032506" watchObservedRunningTime="2025-01-29 11:15:21.455825654 +0000 UTC m=+195.341801725"
Jan 29 11:15:21.578964 systemd-networkd[1357]: lxc_health: Gained IPv6LL
Jan 29 11:15:22.072765 kubelet[2655]: E0129 11:15:22.071477 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:23.083914 kubelet[2655]: E0129 11:15:23.082809 2655 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 29 11:15:23.122683 systemd[1]: run-containerd-runc-k8s.io-f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b-runc.eDp5cm.mount: Deactivated successfully.
Jan 29 11:15:27.749262 systemd[1]: run-containerd-runc-k8s.io-f3a50b0b1de0abb4f4214041607aed3d2320e96255926d46237b5644b6c9747b-runc.lFxsPW.mount: Deactivated successfully.
Jan 29 11:15:30.295188 sshd[4597]: Connection closed by 139.178.89.65 port 56084
Jan 29 11:15:30.299643 sshd-session[4595]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:30.310117 systemd-logind[1450]: Session 39 logged out. Waiting for processes to exit.
Jan 29 11:15:30.314568 systemd[1]: sshd@38-64.23.245.19:22-139.178.89.65:56084.service: Deactivated successfully.
Jan 29 11:15:30.348282 systemd[1]: session-39.scope: Deactivated successfully.
Jan 29 11:15:30.376100 systemd-logind[1450]: Removed session 39.