Jan 16 08:57:48.042258 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 16 08:57:48.042299 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 16 08:57:48.042317 kernel: BIOS-provided physical RAM map:
Jan 16 08:57:48.042327 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 08:57:48.042336 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 08:57:48.042348 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 08:57:48.042365 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 16 08:57:48.042380 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 16 08:57:48.042390 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 08:57:48.042405 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 08:57:48.042438 kernel: NX (Execute Disable) protection: active
Jan 16 08:57:48.042451 kernel: APIC: Static calls initialized
Jan 16 08:57:48.042482 kernel: SMBIOS 2.8 present.
Jan 16 08:57:48.042507 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 16 08:57:48.042526 kernel: Hypervisor detected: KVM
Jan 16 08:57:48.042548 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 08:57:48.042569 kernel: kvm-clock: using sched offset of 3477398145 cycles
Jan 16 08:57:48.042584 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 08:57:48.042599 kernel: tsc: Detected 2494.134 MHz processor
Jan 16 08:57:48.042615 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 08:57:48.042630 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 08:57:48.042645 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 16 08:57:48.042659 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 16 08:57:48.042676 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 08:57:48.042696 kernel: ACPI: Early table checksum verification disabled
Jan 16 08:57:48.042712 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 16 08:57:48.042728 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:48.042745 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:48.042762 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:48.042800 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 16 08:57:48.042814 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:48.042827 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:48.042841 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:48.042860 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:48.042872 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 16 08:57:48.042887 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 16 08:57:48.042900 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 16 08:57:48.042916 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 16 08:57:48.042931 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 16 08:57:48.042944 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 16 08:57:48.042969 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 16 08:57:48.042983 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 16 08:57:48.042996 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 16 08:57:48.043010 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 16 08:57:48.043022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 16 08:57:48.043042 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 16 08:57:48.043058 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 16 08:57:48.043076 kernel: Zone ranges:
Jan 16 08:57:48.043090 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 08:57:48.043109 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 16 08:57:48.043126 kernel: Normal empty
Jan 16 08:57:48.043144 kernel: Movable zone start for each node
Jan 16 08:57:48.043161 kernel: Early memory node ranges
Jan 16 08:57:48.043179 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 08:57:48.043196 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 16 08:57:48.043214 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 16 08:57:48.043243 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 08:57:48.043261 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 08:57:48.043283 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 16 08:57:48.043301 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 08:57:48.043318 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 08:57:48.043336 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 08:57:48.043349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 08:57:48.043364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 08:57:48.043378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 08:57:48.043399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 08:57:48.045472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 08:57:48.045520 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 08:57:48.045534 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 08:57:48.045547 kernel: TSC deadline timer available
Jan 16 08:57:48.045560 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 16 08:57:48.045572 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 08:57:48.045584 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 16 08:57:48.045602 kernel: Booting paravirtualized kernel on KVM
Jan 16 08:57:48.045614 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 08:57:48.045639 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 16 08:57:48.045652 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 16 08:57:48.045664 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 16 08:57:48.045677 kernel: pcpu-alloc: [0] 0 1
Jan 16 08:57:48.045689 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 16 08:57:48.045705 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 16 08:57:48.045719 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 16 08:57:48.045737 kernel: random: crng init done
Jan 16 08:57:48.045749 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 08:57:48.045763 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 16 08:57:48.045774 kernel: Fallback order for Node 0: 0
Jan 16 08:57:48.045786 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 16 08:57:48.045798 kernel: Policy zone: DMA32
Jan 16 08:57:48.045810 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 08:57:48.045824 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125148K reserved, 0K cma-reserved)
Jan 16 08:57:48.045837 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 08:57:48.045856 kernel: Kernel/User page tables isolation: enabled
Jan 16 08:57:48.045868 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 16 08:57:48.045880 kernel: ftrace: allocated 149 pages with 4 groups
Jan 16 08:57:48.045892 kernel: Dynamic Preempt: voluntary
Jan 16 08:57:48.045905 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 08:57:48.045920 kernel: rcu: RCU event tracing is enabled.
Jan 16 08:57:48.045934 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 08:57:48.045947 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 08:57:48.045959 kernel: Rude variant of Tasks RCU enabled.
Jan 16 08:57:48.045971 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 08:57:48.045988 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 08:57:48.046000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 08:57:48.046012 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 16 08:57:48.046026 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 08:57:48.046047 kernel: Console: colour VGA+ 80x25
Jan 16 08:57:48.046061 kernel: printk: console [tty0] enabled
Jan 16 08:57:48.046074 kernel: printk: console [ttyS0] enabled
Jan 16 08:57:48.046085 kernel: ACPI: Core revision 20230628
Jan 16 08:57:48.046100 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 08:57:48.046116 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 08:57:48.046130 kernel: x2apic enabled
Jan 16 08:57:48.046142 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 08:57:48.046157 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 08:57:48.046171 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Jan 16 08:57:48.046183 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
Jan 16 08:57:48.046195 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 16 08:57:48.046207 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 16 08:57:48.046239 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 08:57:48.046253 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 08:57:48.046268 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 16 08:57:48.046285 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 16 08:57:48.046297 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 16 08:57:48.046310 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 16 08:57:48.046324 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 16 08:57:48.046336 kernel: MDS: Mitigation: Clear CPU buffers
Jan 16 08:57:48.046348 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 08:57:48.046372 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 08:57:48.046386 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 08:57:48.046398 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 08:57:48.046411 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 08:57:48.046439 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 16 08:57:48.046453 kernel: Freeing SMP alternatives memory: 32K
Jan 16 08:57:48.046467 kernel: pid_max: default: 32768 minimum: 301
Jan 16 08:57:48.046481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 08:57:48.046501 kernel: landlock: Up and running.
Jan 16 08:57:48.046514 kernel: SELinux: Initializing.
Jan 16 08:57:48.046527 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:57:48.046540 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:57:48.046553 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 16 08:57:48.046566 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:57:48.046579 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:57:48.046593 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:57:48.046607 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 16 08:57:48.046625 kernel: signal: max sigframe size: 1776
Jan 16 08:57:48.046640 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 08:57:48.046654 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 08:57:48.046670 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 08:57:48.046683 kernel: smp: Bringing up secondary CPUs ...
Jan 16 08:57:48.046696 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 08:57:48.046710 kernel: .... node #0, CPUs: #1
Jan 16 08:57:48.046724 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 08:57:48.046744 kernel: smpboot: Max logical packages: 1
Jan 16 08:57:48.046766 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
Jan 16 08:57:48.046780 kernel: devtmpfs: initialized
Jan 16 08:57:48.046792 kernel: x86/mm: Memory block size: 128MB
Jan 16 08:57:48.046806 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 08:57:48.046819 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 08:57:48.046832 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 08:57:48.046847 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 08:57:48.046861 kernel: audit: initializing netlink subsys (disabled)
Jan 16 08:57:48.046874 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 08:57:48.046891 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 08:57:48.046906 kernel: audit: type=2000 audit(1737017866.158:1): state=initialized audit_enabled=0 res=1
Jan 16 08:57:48.046922 kernel: cpuidle: using governor menu
Jan 16 08:57:48.046938 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 08:57:48.046951 kernel: dca service started, version 1.12.1
Jan 16 08:57:48.046966 kernel: PCI: Using configuration type 1 for base access
Jan 16 08:57:48.046980 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 08:57:48.046994 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 08:57:48.047008 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 08:57:48.047026 kernel: ACPI: Added _OSI(Module Device)
Jan 16 08:57:48.047040 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 08:57:48.047055 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 16 08:57:48.047068 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 08:57:48.047082 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 08:57:48.047097 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 16 08:57:48.047112 kernel: ACPI: Interpreter enabled
Jan 16 08:57:48.047126 kernel: ACPI: PM: (supports S0 S5)
Jan 16 08:57:48.047140 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 08:57:48.047161 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 08:57:48.047176 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 08:57:48.047190 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 16 08:57:48.047206 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 08:57:48.051674 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 08:57:48.051931 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 16 08:57:48.052059 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 16 08:57:48.052092 kernel: acpiphp: Slot [3] registered
Jan 16 08:57:48.052139 kernel: acpiphp: Slot [4] registered
Jan 16 08:57:48.052152 kernel: acpiphp: Slot [5] registered
Jan 16 08:57:48.052166 kernel: acpiphp: Slot [6] registered
Jan 16 08:57:48.052179 kernel: acpiphp: Slot [7] registered
Jan 16 08:57:48.052192 kernel: acpiphp: Slot [8] registered
Jan 16 08:57:48.052205 kernel: acpiphp: Slot [9] registered
Jan 16 08:57:48.052216 kernel: acpiphp: Slot [10] registered
Jan 16 08:57:48.052225 kernel: acpiphp: Slot [11] registered
Jan 16 08:57:48.052245 kernel: acpiphp: Slot [12] registered
Jan 16 08:57:48.052255 kernel: acpiphp: Slot [13] registered
Jan 16 08:57:48.052268 kernel: acpiphp: Slot [14] registered
Jan 16 08:57:48.052277 kernel: acpiphp: Slot [15] registered
Jan 16 08:57:48.052286 kernel: acpiphp: Slot [16] registered
Jan 16 08:57:48.052296 kernel: acpiphp: Slot [17] registered
Jan 16 08:57:48.052305 kernel: acpiphp: Slot [18] registered
Jan 16 08:57:48.052314 kernel: acpiphp: Slot [19] registered
Jan 16 08:57:48.052323 kernel: acpiphp: Slot [20] registered
Jan 16 08:57:48.052332 kernel: acpiphp: Slot [21] registered
Jan 16 08:57:48.052345 kernel: acpiphp: Slot [22] registered
Jan 16 08:57:48.052355 kernel: acpiphp: Slot [23] registered
Jan 16 08:57:48.052364 kernel: acpiphp: Slot [24] registered
Jan 16 08:57:48.052372 kernel: acpiphp: Slot [25] registered
Jan 16 08:57:48.052382 kernel: acpiphp: Slot [26] registered
Jan 16 08:57:48.052391 kernel: acpiphp: Slot [27] registered
Jan 16 08:57:48.052400 kernel: acpiphp: Slot [28] registered
Jan 16 08:57:48.052513 kernel: acpiphp: Slot [29] registered
Jan 16 08:57:48.052524 kernel: acpiphp: Slot [30] registered
Jan 16 08:57:48.052538 kernel: acpiphp: Slot [31] registered
Jan 16 08:57:48.052547 kernel: PCI host bridge to bus 0000:00
Jan 16 08:57:48.052761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 08:57:48.052892 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 08:57:48.052982 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 08:57:48.053069 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 16 08:57:48.053156 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 16 08:57:48.053242 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 08:57:48.053379 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 16 08:57:48.053609 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 16 08:57:48.053740 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 16 08:57:48.053849 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 16 08:57:48.053992 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 16 08:57:48.054159 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 16 08:57:48.054302 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 16 08:57:48.054424 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 16 08:57:48.054623 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 16 08:57:48.054740 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 16 08:57:48.054871 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 16 08:57:48.055046 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 16 08:57:48.055241 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 16 08:57:48.055526 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 16 08:57:48.055731 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 16 08:57:48.055838 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 16 08:57:48.055936 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 16 08:57:48.056035 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 16 08:57:48.056131 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 08:57:48.056257 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:57:48.056398 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 16 08:57:48.056543 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 16 08:57:48.056705 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 16 08:57:48.056858 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:57:48.057045 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 16 08:57:48.057157 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 16 08:57:48.057303 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 16 08:57:48.057630 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 16 08:57:48.057803 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 16 08:57:48.057963 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 16 08:57:48.058116 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 16 08:57:48.058317 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:57:48.058521 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 16 08:57:48.058698 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 16 08:57:48.058858 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 16 08:57:48.059031 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:57:48.059194 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 16 08:57:48.059349 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 16 08:57:48.059682 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 16 08:57:48.059861 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 16 08:57:48.060033 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 16 08:57:48.060186 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 16 08:57:48.060207 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 08:57:48.060225 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 08:57:48.060240 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 08:57:48.060253 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 08:57:48.060275 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 16 08:57:48.060291 kernel: iommu: Default domain type: Translated
Jan 16 08:57:48.060306 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 08:57:48.060321 kernel: PCI: Using ACPI for IRQ routing
Jan 16 08:57:48.060335 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 08:57:48.060351 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 08:57:48.060366 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 16 08:57:48.060570 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 16 08:57:48.060730 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 16 08:57:48.060895 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 08:57:48.060917 kernel: vgaarb: loaded
Jan 16 08:57:48.060934 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 08:57:48.060948 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 08:57:48.060962 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 08:57:48.060977 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 08:57:48.060992 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 08:57:48.061006 kernel: pnp: PnP ACPI init
Jan 16 08:57:48.061021 kernel: pnp: PnP ACPI: found 4 devices
Jan 16 08:57:48.061045 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 08:57:48.061060 kernel: NET: Registered PF_INET protocol family
Jan 16 08:57:48.061076 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 08:57:48.061091 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 16 08:57:48.061106 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 08:57:48.061121 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 08:57:48.061136 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 16 08:57:48.061150 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 16 08:57:48.061164 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:57:48.061186 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:57:48.061202 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 08:57:48.061217 kernel: NET: Registered PF_XDP protocol family
Jan 16 08:57:48.061397 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 08:57:48.061621 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 08:57:48.061753 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 08:57:48.061877 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 16 08:57:48.061966 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 16 08:57:48.062082 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 16 08:57:48.062198 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 16 08:57:48.062213 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 16 08:57:48.062315 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 37446 usecs
Jan 16 08:57:48.062328 kernel: PCI: CLS 0 bytes, default 64
Jan 16 08:57:48.062338 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 16 08:57:48.062353 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
Jan 16 08:57:48.062368 kernel: Initialise system trusted keyrings
Jan 16 08:57:48.062387 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 16 08:57:48.062401 kernel: Key type asymmetric registered
Jan 16 08:57:48.062414 kernel: Asymmetric key parser 'x509' registered
Jan 16 08:57:48.062456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 16 08:57:48.062483 kernel: io scheduler mq-deadline registered
Jan 16 08:57:48.062493 kernel: io scheduler kyber registered
Jan 16 08:57:48.062503 kernel: io scheduler bfq registered
Jan 16 08:57:48.062512 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 08:57:48.062522 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 16 08:57:48.062536 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 16 08:57:48.062546 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 16 08:57:48.062555 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 08:57:48.062565 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 08:57:48.062574 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 08:57:48.062584 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 08:57:48.062593 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 08:57:48.062756 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 16 08:57:48.062772 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 16 08:57:48.062886 kernel: rtc_cmos 00:03: registered as rtc0
Jan 16 08:57:48.062979 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T08:57:47 UTC (1737017867)
Jan 16 08:57:48.063078 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 16 08:57:48.063090 kernel: intel_pstate: CPU model not supported
Jan 16 08:57:48.063100 kernel: NET: Registered PF_INET6 protocol family
Jan 16 08:57:48.063110 kernel: Segment Routing with IPv6
Jan 16 08:57:48.063120 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 08:57:48.063129 kernel: NET: Registered PF_PACKET protocol family
Jan 16 08:57:48.063143 kernel: Key type dns_resolver registered
Jan 16 08:57:48.063152 kernel: IPI shorthand broadcast: enabled
Jan 16 08:57:48.063162 kernel: sched_clock: Marking stable (1300005937, 103575628)->(1427399967, -23818402)
Jan 16 08:57:48.063172 kernel: registered taskstats version 1
Jan 16 08:57:48.063181 kernel: Loading compiled-in X.509 certificates
Jan 16 08:57:48.063208 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 16 08:57:48.063217 kernel: Key type .fscrypt registered
Jan 16 08:57:48.063226 kernel: Key type fscrypt-provisioning registered
Jan 16 08:57:48.063235 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 08:57:48.063247 kernel: ima: Allocated hash algorithm: sha1
Jan 16 08:57:48.063256 kernel: ima: No architecture policies found
Jan 16 08:57:48.063265 kernel: clk: Disabling unused clocks
Jan 16 08:57:48.063274 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 16 08:57:48.063283 kernel: Write protecting the kernel read-only data: 36864k
Jan 16 08:57:48.063315 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 16 08:57:48.063327 kernel: Run /init as init process
Jan 16 08:57:48.063337 kernel: with arguments:
Jan 16 08:57:48.063347 kernel: /init
Jan 16 08:57:48.063360 kernel: with environment:
Jan 16 08:57:48.063369 kernel: HOME=/
Jan 16 08:57:48.063378 kernel: TERM=linux
Jan 16 08:57:48.063388 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 16 08:57:48.063401 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 08:57:48.063418 systemd[1]: Detected virtualization kvm.
Jan 16 08:57:48.063508 systemd[1]: Detected architecture x86-64.
Jan 16 08:57:48.063519 systemd[1]: Running in initrd.
Jan 16 08:57:48.063535 systemd[1]: No hostname configured, using default hostname.
Jan 16 08:57:48.063672 systemd[1]: Hostname set to .
Jan 16 08:57:48.063688 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 08:57:48.063702 systemd[1]: Queued start job for default target initrd.target.
Jan 16 08:57:48.063717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 08:57:48.063733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 08:57:48.063746 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 08:57:48.063756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 08:57:48.063772 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 08:57:48.063782 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 08:57:48.063794 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 08:57:48.063805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 08:57:48.063816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 08:57:48.063826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 08:57:48.063840 systemd[1]: Reached target paths.target - Path Units.
Jan 16 08:57:48.063850 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 08:57:48.063860 systemd[1]: Reached target swap.target - Swaps.
Jan 16 08:57:48.063874 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 08:57:48.063884 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 08:57:48.063895 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 08:57:48.063908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 08:57:48.063919 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 08:57:48.063929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 08:57:48.063940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 08:57:48.063950 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 08:57:48.063960 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 08:57:48.063970 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 08:57:48.063980 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 08:57:48.063994 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 08:57:48.064004 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 08:57:48.064014 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 08:57:48.064025 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 08:57:48.064035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:57:48.064045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 08:57:48.064056 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 08:57:48.064066 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 08:57:48.064079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 08:57:48.064131 systemd-journald[184]: Collecting audit messages is disabled.
Jan 16 08:57:48.064164 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 08:57:48.064181 systemd-journald[184]: Journal started
Jan 16 08:57:48.064214 systemd-journald[184]: Runtime Journal (/run/log/journal/2ce4ad752a934db2a00c5550db9fb708) is 4.9M, max 39.3M, 34.4M free.
Jan 16 08:57:48.050511 systemd-modules-load[185]: Inserted module 'overlay'
Jan 16 08:57:48.105483 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 08:57:48.105529 kernel: Bridge firewalling registered
Jan 16 08:57:48.103882 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 16 08:57:48.107482 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 08:57:48.111738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 08:57:48.112707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:57:48.121785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:57:48.129898 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 08:57:48.136012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 08:57:48.138671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 08:57:48.158881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 08:57:48.163887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 08:57:48.168893 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:57:48.174908 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 08:57:48.180894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 08:57:48.188688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 08:57:48.201841 dracut-cmdline[216]: dracut-dracut-053
Jan 16 08:57:48.209446 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 16 08:57:48.236021 systemd-resolved[219]: Positive Trust Anchors:
Jan 16 08:57:48.236052 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 08:57:48.236108 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 08:57:48.243770 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 16 08:57:48.246954 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 08:57:48.247678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 08:57:48.323493 kernel: SCSI subsystem initialized
Jan 16 08:57:48.338466 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 08:57:48.355469 kernel: iscsi: registered transport (tcp)
Jan 16 08:57:48.387734 kernel: iscsi: registered transport (qla4xxx)
Jan 16 08:57:48.387834 kernel: QLogic iSCSI HBA Driver
Jan 16 08:57:48.450634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 08:57:48.456743 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 08:57:48.495670 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 08:57:48.495769 kernel: device-mapper: uevent: version 1.0.3
Jan 16 08:57:48.497073 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 08:57:48.547488 kernel: raid6: avx2x4 gen() 12928 MB/s
Jan 16 08:57:48.564500 kernel: raid6: avx2x2 gen() 13618 MB/s
Jan 16 08:57:48.581920 kernel: raid6: avx2x1 gen() 10775 MB/s
Jan 16 08:57:48.582029 kernel: raid6: using algorithm avx2x2 gen() 13618 MB/s
Jan 16 08:57:48.599936 kernel: raid6: .... xor() 15815 MB/s, rmw enabled
Jan 16 08:57:48.600059 kernel: raid6: using avx2x2 recovery algorithm
Jan 16 08:57:48.625474 kernel: xor: automatically using best checksumming function avx
Jan 16 08:57:48.822553 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 08:57:48.841093 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 08:57:48.849824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 08:57:48.876124 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 16 08:57:48.884967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 08:57:48.894559 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 08:57:48.925659 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jan 16 08:57:48.973256 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 08:57:48.981787 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 08:57:49.067456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 08:57:49.073973 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 08:57:49.113207 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 08:57:49.115135 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 08:57:49.116524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 08:57:49.117028 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 08:57:49.124789 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 08:57:49.160233 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 08:57:49.200343 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 16 08:57:49.319727 kernel: scsi host0: Virtio SCSI HBA
Jan 16 08:57:49.319969 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 08:57:49.319993 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 16 08:57:49.320176 kernel: ACPI: bus type USB registered
Jan 16 08:57:49.320198 kernel: usbcore: registered new interface driver usbfs
Jan 16 08:57:49.320217 kernel: usbcore: registered new interface driver hub
Jan 16 08:57:49.320236 kernel: usbcore: registered new device driver usb
Jan 16 08:57:49.320253 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 08:57:49.320271 kernel: GPT:9289727 != 125829119
Jan 16 08:57:49.320289 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 08:57:49.320304 kernel: GPT:9289727 != 125829119
Jan 16 08:57:49.320326 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 08:57:49.320344 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:57:49.320363 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 16 08:57:49.320382 kernel: AES CTR mode by8 optimization enabled
Jan 16 08:57:49.320401 kernel: libata version 3.00 loaded.
Jan 16 08:57:49.320443 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 16 08:57:49.333818 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Jan 16 08:57:49.235845 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 08:57:49.385699 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 16 08:57:49.386095 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 16 08:57:49.386350 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 16 08:57:49.386580 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 16 08:57:49.386737 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 16 08:57:49.386859 kernel: hub 1-0:1.0: USB hub found
Jan 16 08:57:49.387144 kernel: hub 1-0:1.0: 2 ports detected
Jan 16 08:57:49.387386 kernel: scsi host1: ata_piix
Jan 16 08:57:49.387896 kernel: scsi host2: ata_piix
Jan 16 08:57:49.388116 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 16 08:57:49.388148 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 16 08:57:49.236069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:57:49.238321 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:57:49.239447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 08:57:49.239840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:57:49.241578 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:57:49.252876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:57:49.391753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:57:49.407450 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463)
Jan 16 08:57:49.412260 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 16 08:57:49.416451 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (450)
Jan 16 08:57:49.424689 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 16 08:57:49.433408 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 08:57:49.441796 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:57:49.446350 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 16 08:57:49.448792 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 16 08:57:49.457692 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 08:57:49.467478 disk-uuid[547]: Primary Header is updated.
Jan 16 08:57:49.467478 disk-uuid[547]: Secondary Entries is updated.
Jan 16 08:57:49.467478 disk-uuid[547]: Secondary Header is updated.
Jan 16 08:57:49.473459 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:57:49.475292 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:57:49.479581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:57:50.487438 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:57:50.489017 disk-uuid[550]: The operation has completed successfully.
Jan 16 08:57:50.542731 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 08:57:50.542863 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 08:57:50.554713 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 08:57:50.560306 sh[562]: Success
Jan 16 08:57:50.580583 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 16 08:57:50.640843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 08:57:50.649612 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 08:57:50.658229 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 08:57:50.688625 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 16 08:57:50.688732 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:57:50.688759 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 08:57:50.689709 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 08:57:50.690450 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 08:57:50.700781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 08:57:50.701949 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 08:57:50.714795 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 08:57:50.718645 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 08:57:50.736174 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:50.736278 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:57:50.736300 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:57:50.744055 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:57:50.758952 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 08:57:50.760033 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:50.769459 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 08:57:50.776677 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 08:57:50.896165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 08:57:50.904876 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 08:57:50.943284 systemd-networkd[745]: lo: Link UP
Jan 16 08:57:50.943297 systemd-networkd[745]: lo: Gained carrier
Jan 16 08:57:50.947685 systemd-networkd[745]: Enumeration completed
Jan 16 08:57:50.948244 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:57:50.948249 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 16 08:57:50.949072 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 08:57:50.950431 systemd[1]: Reached target network.target - Network.
Jan 16 08:57:50.952951 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:57:50.952956 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 08:57:50.954756 systemd-networkd[745]: eth0: Link UP
Jan 16 08:57:50.954761 systemd-networkd[745]: eth0: Gained carrier
Jan 16 08:57:50.954774 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:57:50.958381 systemd-networkd[745]: eth1: Link UP
Jan 16 08:57:50.958387 systemd-networkd[745]: eth1: Gained carrier
Jan 16 08:57:50.958407 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:57:50.973567 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.30/20 acquired from 169.254.169.253
Jan 16 08:57:50.980551 systemd-networkd[745]: eth0: DHCPv4 address 24.199.127.61/20, gateway 24.199.112.1 acquired from 169.254.169.253
Jan 16 08:57:50.987016 ignition[650]: Ignition 2.20.0
Jan 16 08:57:50.987039 ignition[650]: Stage: fetch-offline
Jan 16 08:57:50.987134 ignition[650]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:50.987153 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:50.989919 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 08:57:50.987393 ignition[650]: parsed url from cmdline: ""
Jan 16 08:57:50.987400 ignition[650]: no config URL provided
Jan 16 08:57:50.987410 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:57:50.987443 ignition[650]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:57:50.987468 ignition[650]: failed to fetch config: resource requires networking
Jan 16 08:57:50.988067 ignition[650]: Ignition finished successfully
Jan 16 08:57:50.995762 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 08:57:51.016693 ignition[755]: Ignition 2.20.0
Jan 16 08:57:51.016727 ignition[755]: Stage: fetch
Jan 16 08:57:51.017009 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:51.017031 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:51.017184 ignition[755]: parsed url from cmdline: ""
Jan 16 08:57:51.017191 ignition[755]: no config URL provided
Jan 16 08:57:51.017199 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:57:51.017227 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:57:51.017265 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 16 08:57:51.034698 ignition[755]: GET result: OK
Jan 16 08:57:51.034853 ignition[755]: parsing config with SHA512: 5009a97f03f1e09cba05bc6ee7afc2496bf8afb374562130cd52ba34b19135bf2d5fcc41007614cd850285aa77552c5cc873f445d6f4160f9202e6263980300d
Jan 16 08:57:51.044812 unknown[755]: fetched base config from "system"
Jan 16 08:57:51.044841 unknown[755]: fetched base config from "system"
Jan 16 08:57:51.045939 ignition[755]: fetch: fetch complete
Jan 16 08:57:51.044853 unknown[755]: fetched user config from "digitalocean"
Jan 16 08:57:51.045951 ignition[755]: fetch: fetch passed
Jan 16 08:57:51.049781 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 08:57:51.046065 ignition[755]: Ignition finished successfully
Jan 16 08:57:51.062896 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 08:57:51.084010 ignition[761]: Ignition 2.20.0
Jan 16 08:57:51.084027 ignition[761]: Stage: kargs
Jan 16 08:57:51.084322 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:51.084342 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:51.086038 ignition[761]: kargs: kargs passed
Jan 16 08:57:51.086138 ignition[761]: Ignition finished successfully
Jan 16 08:57:51.087705 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 08:57:51.095944 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 08:57:51.119742 ignition[767]: Ignition 2.20.0
Jan 16 08:57:51.119762 ignition[767]: Stage: disks
Jan 16 08:57:51.120051 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:51.120069 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:51.121635 ignition[767]: disks: disks passed
Jan 16 08:57:51.123042 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 08:57:51.121716 ignition[767]: Ignition finished successfully
Jan 16 08:57:51.127205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 08:57:51.128052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 08:57:51.128776 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 08:57:51.129732 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 08:57:51.130614 systemd[1]: Reached target basic.target - Basic System.
Jan 16 08:57:51.142768 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 08:57:51.164388 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 08:57:51.168745 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 08:57:51.174611 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 08:57:51.321478 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 16 08:57:51.321783 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 08:57:51.322796 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 08:57:51.329613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 08:57:51.340547 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 08:57:51.344879 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jan 16 08:57:51.348197 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 08:57:51.351370 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 08:57:51.352076 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 08:57:51.362461 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (783)
Jan 16 08:57:51.363658 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 08:57:51.367621 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:51.369827 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:57:51.369911 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:57:51.388625 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:57:51.384344 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 08:57:51.391301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 08:57:51.472464 coreos-metadata[786]: Jan 16 08:57:51.472 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 08:57:51.476871 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 08:57:51.484687 coreos-metadata[785]: Jan 16 08:57:51.484 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 08:57:51.487533 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Jan 16 08:57:51.494778 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 08:57:51.501405 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 08:57:51.514885 coreos-metadata[786]: Jan 16 08:57:51.513 INFO Fetch successful
Jan 16 08:57:51.516260 coreos-metadata[785]: Jan 16 08:57:51.516 INFO Fetch successful
Jan 16 08:57:51.522316 coreos-metadata[786]: Jan 16 08:57:51.522 INFO wrote hostname ci-4152.2.0-e-393f89f1d0 to /sysroot/etc/hostname
Jan 16 08:57:51.524315 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jan 16 08:57:51.524559 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jan 16 08:57:51.527319 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 08:57:51.653117 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 08:57:51.659704 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 08:57:51.663092 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 08:57:51.686450 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:51.687140 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 08:57:51.707346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 08:57:51.722140 ignition[906]: INFO : Ignition 2.20.0
Jan 16 08:57:51.722140 ignition[906]: INFO : Stage: mount
Jan 16 08:57:51.724159 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:51.724159 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:51.725502 ignition[906]: INFO : mount: mount passed
Jan 16 08:57:51.725502 ignition[906]: INFO : Ignition finished successfully
Jan 16 08:57:51.726812 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 08:57:51.733653 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 08:57:51.761803 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 08:57:51.775617 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (916) Jan 16 08:57:51.779214 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 16 08:57:51.779309 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 08:57:51.779331 kernel: BTRFS info (device vda6): using free space tree Jan 16 08:57:51.784467 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 08:57:51.788112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 08:57:51.826127 ignition[933]: INFO : Ignition 2.20.0 Jan 16 08:57:51.826127 ignition[933]: INFO : Stage: files Jan 16 08:57:51.827377 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 08:57:51.827377 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 08:57:51.828519 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Jan 16 08:57:51.829021 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 08:57:51.829021 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 08:57:51.832317 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 08:57:51.833106 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 08:57:51.833106 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 08:57:51.833068 unknown[933]: wrote ssh authorized keys file for user: core Jan 16 08:57:51.835224 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 08:57:51.836052 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 08:57:51.836052 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 08:57:51.836052 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 16 08:57:51.882523 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 16 08:57:52.016299 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 08:57:52.016299 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 16 08:57:52.018342 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 16 08:57:52.386981 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 16 08:57:52.463666 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 16 08:57:52.464579 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 16 08:57:52.464579 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 08:57:52.464579 ignition[933]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 08:57:52.464579 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 08:57:52.464579 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 08:57:52.464579 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 08:57:52.464579 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 08:57:52.464579 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 08:57:52.473586 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 08:57:52.473586 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 08:57:52.473586 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:57:52.473586 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:57:52.473586 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:57:52.473586 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 16 08:57:52.768113 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 16 08:57:52.885646 systemd-networkd[745]: eth1: Gained IPv6LL Jan 16 08:57:53.013608 systemd-networkd[745]: eth0: Gained IPv6LL Jan 16 08:57:53.096904 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 08:57:53.098807 ignition[933]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 16 08:57:53.100751 ignition[933]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(f): 
[finished] processing unit "prepare-helm.service" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 16 08:57:53.101860 ignition[933]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 08:57:53.101860 ignition[933]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 08:57:53.101860 ignition[933]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 08:57:53.101860 ignition[933]: INFO : files: files passed Jan 16 08:57:53.113081 ignition[933]: INFO : Ignition finished successfully Jan 16 08:57:53.103830 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 08:57:53.109793 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 08:57:53.119848 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 08:57:53.126320 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 08:57:53.126483 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 08:57:53.141335 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:57:53.141335 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:57:53.143495 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:57:53.147042 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 08:57:53.148116 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 08:57:53.155813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 08:57:53.219972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 08:57:53.220154 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 08:57:53.222519 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 08:57:53.223153 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 08:57:53.224352 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 08:57:53.231779 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 08:57:53.260312 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 08:57:53.267874 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 08:57:53.297371 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 08:57:53.298260 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 08:57:53.299333 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 08:57:53.300355 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 08:57:53.300592 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 08:57:53.301939 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 08:57:53.303030 systemd[1]: Stopped target basic.target - Basic System. Jan 16 08:57:53.303916 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
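
Op(12) above leaves a small provisioning record at /sysroot/etc/.ignition-result.json before the initramfs hands off. The log doesn't show the file's contents or schema, so the sketch below simply loads and pretty-prints whatever is there, assuming it survives into the booted system at /etc/.ignition-result.json:

    import json

    # Written at the end of the Ignition files stage; schema not shown in the log.
    with open("/etc/.ignition-result.json") as f:
        print(json.dumps(json.load(f), indent=2))
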
Jan 16 08:57:53.304804 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 08:57:53.305839 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 08:57:53.306809 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 08:57:53.307784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 08:57:53.308774 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 08:57:53.309781 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 08:57:53.310774 systemd[1]: Stopped target swap.target - Swaps. Jan 16 08:57:53.311691 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 08:57:53.311949 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 08:57:53.313177 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 08:57:53.314448 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 08:57:53.315504 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 08:57:53.316625 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 08:57:53.317927 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 08:57:53.318152 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 08:57:53.319738 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 08:57:53.319940 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 08:57:53.321012 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 08:57:53.321136 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 08:57:53.322033 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 08:57:53.322235 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 08:57:53.334871 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 08:57:53.336886 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 08:57:53.337119 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 08:57:53.341861 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 08:57:53.342347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 08:57:53.342636 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 08:57:53.350521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 08:57:53.350848 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 08:57:53.365968 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 08:57:53.366770 ignition[985]: INFO : Ignition 2.20.0 Jan 16 08:57:53.366770 ignition[985]: INFO : Stage: umount Jan 16 08:57:53.366770 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 08:57:53.366770 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 08:57:53.371222 ignition[985]: INFO : umount: umount passed Jan 16 08:57:53.371222 ignition[985]: INFO : Ignition finished successfully Jan 16 08:57:53.368678 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 08:57:53.380298 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 16 08:57:53.380501 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 08:57:53.386372 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 08:57:53.386500 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 08:57:53.387154 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 08:57:53.387231 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 08:57:53.388827 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 08:57:53.388903 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 08:57:53.389726 systemd[1]: Stopped target network.target - Network. Jan 16 08:57:53.390284 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 08:57:53.390402 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 08:57:53.390939 systemd[1]: Stopped target paths.target - Path Units. Jan 16 08:57:53.391373 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 08:57:53.397734 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 08:57:53.398622 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 08:57:53.399066 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 08:57:53.400695 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 08:57:53.400774 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 08:57:53.402184 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 08:57:53.402266 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 08:57:53.406550 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 08:57:53.406679 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 08:57:53.407798 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 08:57:53.407896 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 08:57:53.408570 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 08:57:53.409148 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 08:57:53.412912 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 08:57:53.414527 systemd-networkd[745]: eth0: DHCPv6 lease lost Jan 16 08:57:53.414563 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 08:57:53.414707 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 08:57:53.416873 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 08:57:53.417040 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 08:57:53.417537 systemd-networkd[745]: eth1: DHCPv6 lease lost Jan 16 08:57:53.422398 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 08:57:53.422647 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 08:57:53.427216 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 08:57:53.427491 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 08:57:53.430753 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 08:57:53.430843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 08:57:53.438662 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 08:57:53.439340 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 16 08:57:53.439656 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 08:57:53.440767 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 08:57:53.440866 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:57:53.442945 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 08:57:53.443056 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 08:57:53.446834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 08:57:53.446939 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 08:57:53.453995 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 08:57:53.473006 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 08:57:53.474028 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 08:57:53.474956 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 08:57:53.476394 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 08:57:53.478068 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 08:57:53.478189 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 08:57:53.479742 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 08:57:53.479791 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 08:57:53.482761 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 08:57:53.484307 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 08:57:53.485648 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 08:57:53.485726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 08:57:53.486595 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 08:57:53.486660 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 08:57:53.491824 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 08:57:53.492287 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 08:57:53.492393 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 08:57:53.493013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 08:57:53.493078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:57:53.505561 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 08:57:53.505751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 08:57:53.507851 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 08:57:53.513733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 08:57:53.534620 systemd[1]: Switching root. Jan 16 08:57:53.563904 systemd-journald[184]: Journal stopped Jan 16 08:57:55.238606 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
Jan 16 08:57:55.238741 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 08:57:55.238766 kernel: SELinux: policy capability open_perms=1 Jan 16 08:57:55.238787 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 08:57:55.238822 kernel: SELinux: policy capability always_check_network=0 Jan 16 08:57:55.238846 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 08:57:55.238866 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 08:57:55.238889 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 08:57:55.238921 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 08:57:55.238942 kernel: audit: type=1403 audit(1737017873.932:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 08:57:55.238965 systemd[1]: Successfully loaded SELinux policy in 48.203ms. Jan 16 08:57:55.238998 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.833ms. Jan 16 08:57:55.239023 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 08:57:55.239047 systemd[1]: Detected virtualization kvm. Jan 16 08:57:55.239070 systemd[1]: Detected architecture x86-64. Jan 16 08:57:55.239091 systemd[1]: Detected first boot. Jan 16 08:57:55.239119 systemd[1]: Hostname set to <ci-4152.2.0-e-393f89f1d0>. Jan 16 08:57:55.239146 systemd[1]: Initializing machine ID from VM UUID. Jan 16 08:57:55.239168 zram_generator::config[1049]: No configuration found. Jan 16 08:57:55.239201 systemd[1]: Populated /etc with preset unit settings. Jan 16 08:57:55.239224 systemd[1]: Queued start job for default target multi-user.target. Jan 16 08:57:55.239247 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 08:57:55.239268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 08:57:55.239284 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 08:57:55.239311 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 08:57:55.239349 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 08:57:55.239369 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 08:57:55.239389 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 08:57:55.239408 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 08:57:55.245479 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 08:57:55.245514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 08:57:55.245530 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 08:57:55.245545 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 08:57:55.245560 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 08:57:55.245589 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 08:57:55.245610 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
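
The long "+PAM +AUDIT ... -SYSVINIT" list above is systemd 255's compile-time feature string: a leading "+" marks a feature built in, "-" one compiled out. The -BPF_FRAMEWORK entry here is what triggers the warning just below that systemd-journald.service "configures an IP firewall, but the local system does not support BPF/cgroup firewalling". A small parser for such strings (illustrative; the format is informal but stable across releases):

    def parse_features(feature_string):
        """Split a systemd feature string into enabled/disabled sets,
        skipping key=value entries such as default-hierarchy=unified."""
        enabled, disabled = set(), set()
        for token in feature_string.split():
            if token.startswith("+"):
                enabled.add(token[1:])
            elif token.startswith("-") and "=" not in token:
                disabled.add(token[1:])
        return enabled, disabled

    enabled, disabled = parse_features("+PAM +AUDIT +SELINUX -APPARMOR -BPF_FRAMEWORK")
    print(sorted(disabled))  # ['APPARMOR', 'BPF_FRAMEWORK']
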
Jan 16 08:57:55.245629 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 08:57:55.245650 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 08:57:55.245669 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 08:57:55.245687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 08:57:55.245705 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 08:57:55.245728 systemd[1]: Reached target slices.target - Slice Units. Jan 16 08:57:55.245747 systemd[1]: Reached target swap.target - Swaps. Jan 16 08:57:55.245765 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 08:57:55.245783 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 08:57:55.245802 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 08:57:55.245823 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 08:57:55.245843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 08:57:55.245878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 08:57:55.245898 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 08:57:55.245916 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 08:57:55.245931 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 08:57:55.245944 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 08:57:55.245959 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 08:57:55.245973 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:55.246017 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 08:57:55.246031 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 08:57:55.246051 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 08:57:55.246072 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 08:57:55.246090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:55.246109 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 08:57:55.246128 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 08:57:55.246146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:57:55.246166 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 08:57:55.246186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:57:55.246204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 08:57:55.246226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:57:55.246261 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 08:57:55.246290 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 16 08:57:55.246312 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 16 08:57:55.246331 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 08:57:55.246345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 08:57:55.246358 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 08:57:55.246396 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 08:57:55.246410 kernel: loop: module loaded Jan 16 08:57:55.246445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 08:57:55.246460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:55.246473 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 08:57:55.246487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 08:57:55.246500 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 08:57:55.246577 systemd-journald[1140]: Collecting audit messages is disabled. Jan 16 08:57:55.246606 kernel: fuse: init (API version 7.39) Jan 16 08:57:55.246624 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 08:57:55.246638 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 08:57:55.246650 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 08:57:55.246664 systemd-journald[1140]: Journal started Jan 16 08:57:55.246691 systemd-journald[1140]: Runtime Journal (/run/log/journal/2ce4ad752a934db2a00c5550db9fb708) is 4.9M, max 39.3M, 34.4M free. Jan 16 08:57:55.254275 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 08:57:55.258098 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 08:57:55.260308 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 08:57:55.260639 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 08:57:55.263014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:57:55.263301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:57:55.266071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:57:55.266355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:57:55.267540 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:57:55.267823 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:57:55.271309 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 08:57:55.272788 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 08:57:55.273501 kernel: ACPI: bus type drm_connector registered Jan 16 08:57:55.275395 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 08:57:55.281927 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 08:57:55.282242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 08:57:55.286923 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 08:57:55.287772 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 16 08:57:55.295624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 08:57:55.312286 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 08:57:55.319597 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 08:57:55.329763 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 08:57:55.330756 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 08:57:55.339708 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 08:57:55.358359 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 08:57:55.359199 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:57:55.368556 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 08:57:55.369669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:57:55.381754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:57:55.397622 systemd-journald[1140]: Time spent on flushing to /var/log/journal/2ce4ad752a934db2a00c5550db9fb708 is 79.645ms for 975 entries. Jan 16 08:57:55.397622 systemd-journald[1140]: System Journal (/var/log/journal/2ce4ad752a934db2a00c5550db9fb708) is 8.0M, max 195.6M, 187.6M free. Jan 16 08:57:55.505074 systemd-journald[1140]: Received client request to flush runtime journal. Jan 16 08:57:55.396715 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 08:57:55.412325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 08:57:55.414075 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 08:57:55.431948 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 08:57:55.433017 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 08:57:55.498084 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:57:55.510209 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 08:57:55.514947 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 16 08:57:55.514973 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 16 08:57:55.525845 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 08:57:55.527349 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 08:57:55.539656 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 08:57:55.548730 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 08:57:55.590586 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 16 08:57:55.625836 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 08:57:55.636768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 08:57:55.677403 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. 
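
The journald lines above show the runtime journal under /run/log/journal being flushed to the persistent one under /var/log/journal (975 entries in about 80 ms). To read those entries programmatically, the python-systemd bindings expose a Reader; a minimal sketch, assuming the python-systemd package is available on the host:

    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()  # restrict to entries from the current boot

    for entry in reader:
        # Entries are dicts of journal fields.
        print(entry.get("__REALTIME_TIMESTAMP"),
              entry.get("SYSLOG_IDENTIFIER"),
              entry.get("MESSAGE"))
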
Jan 16 08:57:55.677455 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Jan 16 08:57:55.689150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 08:57:56.396166 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 08:57:56.403777 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 08:57:56.438551 systemd-udevd[1216]: Using default interface naming scheme 'v255'. Jan 16 08:57:56.467272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 08:57:56.479735 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 08:57:56.517711 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 08:57:56.556472 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1227) Jan 16 08:57:56.618599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:56.619121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:56.628688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:57:56.643669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:57:56.666796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:57:56.669588 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 08:57:56.669678 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 08:57:56.669758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:56.699404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:57:56.699745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:57:56.725739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:57:56.726039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:57:56.727258 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:57:56.729751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:57:56.742131 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 16 08:57:56.764935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 08:57:56.768632 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:57:56.768693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:57:56.777619 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 16 08:57:56.842636 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 16 08:57:56.843455 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 16 08:57:56.849504 kernel: ACPI: button: Power Button [PWRF] Jan 16 08:57:56.902850 systemd-networkd[1220]: lo: Link UP Jan 16 08:57:56.903398 systemd-networkd[1220]: lo: Gained carrier Jan 16 08:57:56.907853 systemd-networkd[1220]: Enumeration completed Jan 16 08:57:56.908090 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 08:57:56.909137 systemd-networkd[1220]: eth0: Configuring with /run/systemd/network/10-ba:29:2a:a4:6b:eb.network. Jan 16 08:57:56.912776 systemd-networkd[1220]: eth1: Configuring with /run/systemd/network/10-82:bb:fc:0b:6a:6b.network. Jan 16 08:57:56.913872 systemd-networkd[1220]: eth0: Link UP Jan 16 08:57:56.914000 systemd-networkd[1220]: eth0: Gained carrier Jan 16 08:57:56.916719 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 08:57:56.917925 systemd-networkd[1220]: eth1: Link UP Jan 16 08:57:56.918119 systemd-networkd[1220]: eth1: Gained carrier Jan 16 08:57:56.965445 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 16 08:57:56.988451 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 08:57:56.985947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 08:57:57.043840 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 16 08:57:57.043951 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 16 08:57:57.051779 kernel: Console: switching to colour dummy device 80x25 Jan 16 08:57:57.051888 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 08:57:57.051915 kernel: [drm] features: -context_init Jan 16 08:57:57.051940 kernel: [drm] number of scanouts: 1 Jan 16 08:57:57.059474 kernel: [drm] number of cap sets: 0 Jan 16 08:57:57.059584 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 16 08:57:57.076978 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 16 08:57:57.077098 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 08:57:57.092445 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 08:57:57.143694 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 08:57:57.144142 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:57:57.154838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 08:57:57.207469 kernel: EDAC MC: Ver: 3.0.0 Jan 16 08:57:57.242612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:57:57.243390 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 08:57:57.252787 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 08:57:57.286452 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 08:57:57.324146 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 08:57:57.325802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 08:57:57.339819 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 08:57:57.346685 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
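
Earlier in this stretch, systemd-networkd enumerates the NICs and configures each from a generated unit at /run/systemd/network/10-<mac>.network, matching interfaces by MAC address. The units' contents aren't shown in the log; a plausible minimal unit of that shape, rendered from Python purely for illustration (field names follow systemd.network(5); the generated files on this host likely carry more settings):

    def render_network_unit(mac):
        # Match one NIC by MAC address and let DHCP configure it.
        return (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            "DHCP=yes\n"
        )

    print(render_network_unit("ba:29:2a:a4:6b:eb"))  # eth0's MAC from the log
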
Jan 16 08:57:57.380222 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 08:57:57.381807 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 08:57:57.394675 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 16 08:57:57.394912 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 08:57:57.394969 systemd[1]: Reached target machines.target - Containers. Jan 16 08:57:57.399710 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 08:57:57.421451 kernel: ISO 9660 Extensions: RRIP_1991A Jan 16 08:57:57.421123 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 16 08:57:57.424444 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 08:57:57.426547 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 08:57:57.435748 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 08:57:57.440255 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 08:57:57.443566 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:57.448691 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 08:57:57.459695 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 08:57:57.485723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 08:57:57.489889 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 08:57:57.514407 kernel: loop0: detected capacity change from 0 to 138184 Jan 16 08:57:57.522303 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 08:57:57.527160 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 08:57:57.554560 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 08:57:57.583217 kernel: loop1: detected capacity change from 0 to 140992 Jan 16 08:57:57.632880 kernel: loop2: detected capacity change from 0 to 211296 Jan 16 08:57:57.689523 kernel: loop3: detected capacity change from 0 to 8 Jan 16 08:57:57.716471 kernel: loop4: detected capacity change from 0 to 138184 Jan 16 08:57:57.748193 kernel: loop5: detected capacity change from 0 to 140992 Jan 16 08:57:57.769734 kernel: loop6: detected capacity change from 0 to 211296 Jan 16 08:57:57.793470 kernel: loop7: detected capacity change from 0 to 8 Jan 16 08:57:57.795525 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 16 08:57:57.796221 (sd-merge)[1307]: Merged extensions into '/usr'. Jan 16 08:57:57.802278 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 08:57:57.802556 systemd[1]: Reloading... Jan 16 08:57:57.905834 zram_generator::config[1342]: No configuration found. Jan 16 08:57:58.114953 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
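
The loop0-loop7 capacity changes and the (sd-merge) lines above are systemd-sysext at work: each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-digitalocean) is attached as a loop device and merged into /usr. The kubernetes image was linked into place by the earlier Ignition files stage ("/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/..."). A small sketch that lists such extension links, assuming the same layout in the booted system:

    import os

    EXT_DIR = "/etc/extensions"  # where enabled sysext images are linked

    for name in sorted(os.listdir(EXT_DIR)):
        path = os.path.join(EXT_DIR, name)
        # Resolve the symlink Ignition wrote to the backing image.
        print(f"{name} -> {os.path.realpath(path)}")
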
Jan 16 08:57:58.140247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:57:58.246374 systemd[1]: Reloading finished in 443 ms. Jan 16 08:57:58.264205 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 08:57:58.268165 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 08:57:58.281848 systemd[1]: Starting ensure-sysext.service... Jan 16 08:57:58.295733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 08:57:58.307529 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Jan 16 08:57:58.307576 systemd[1]: Reloading... Jan 16 08:57:58.326703 systemd-networkd[1220]: eth1: Gained IPv6LL Jan 16 08:57:58.362715 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 08:57:58.364063 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 08:57:58.366963 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 08:57:58.367857 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 16 08:57:58.367959 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 16 08:57:58.375047 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 08:57:58.375289 systemd-tmpfiles[1387]: Skipping /boot Jan 16 08:57:58.398289 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 08:57:58.398309 systemd-tmpfiles[1387]: Skipping /boot Jan 16 08:57:58.455847 zram_generator::config[1416]: No configuration found. Jan 16 08:57:58.617170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:57:58.690198 systemd[1]: Reloading finished in 381 ms. Jan 16 08:57:58.706919 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 08:57:58.709588 systemd-networkd[1220]: eth0: Gained IPv6LL Jan 16 08:57:58.714340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 08:57:58.736749 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 08:57:58.753818 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 08:57:58.762399 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 08:57:58.776818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 08:57:58.792688 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 08:57:58.811184 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:58.811541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:58.817304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 16 08:57:58.834630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:57:58.854096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:57:58.856485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:58.856660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:58.866905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:58.867349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:58.867655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:58.867807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:58.875565 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 08:57:58.882374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:58.887240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:58.892779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 08:57:58.895355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:58.895572 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:58.906431 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 08:57:58.916112 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 08:57:58.934875 systemd[1]: Finished ensure-sysext.service. Jan 16 08:57:58.936268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:57:58.940635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:57:58.941875 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:57:58.942089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:57:58.946268 systemd-resolved[1472]: Positive Trust Anchors: Jan 16 08:57:58.946799 systemd-resolved[1472]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 08:57:58.946844 systemd-resolved[1472]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 08:57:58.950291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
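
The "Positive Trust Anchors" block above shows systemd-resolved loading the root zone's DNSSEC trust anchor; the DS record with key tag 20326, algorithm 8, digest type 2 is the well-known root KSK-2017 anchor. A tiny parser for a DS line of that shape, with the IANA registry meanings of the numeric fields spelled out (illustrative only):

    ALGORITHMS = {8: "RSA/SHA-256"}   # IANA DNSSEC algorithm numbers
    DIGEST_TYPES = {2: "SHA-256"}     # IANA DS digest types

    def parse_ds(line):
        """Parse '<owner> IN DS <keytag> <alg> <digesttype> <digest>'."""
        owner, _cls, _rtype, keytag, alg, dtype, digest = line.split()
        return {
            "owner": owner,
            "key_tag": int(keytag),
            "algorithm": ALGORITHMS.get(int(alg), int(alg)),
            "digest_type": DIGEST_TYPES.get(int(dtype), int(dtype)),
            "digest": digest,
        }

    ds = parse_ds(". IN DS 20326 8 2 "
                  "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    print(ds["key_tag"], ds["algorithm"])  # 20326 RSA/SHA-256
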
Jan 16 08:57:58.952028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:57:58.957248 systemd-resolved[1472]: Using system hostname 'ci-4152.2.0-e-393f89f1d0'. Jan 16 08:57:58.958248 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 08:57:58.958578 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 08:57:58.965828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 08:57:58.981136 systemd[1]: Reached target network.target - Network. Jan 16 08:57:58.983274 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 08:57:58.986008 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 08:57:58.986338 augenrules[1514]: No rules Jan 16 08:57:58.986871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:57:58.986977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:57:58.996897 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 08:57:59.003812 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 08:57:59.008337 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 08:57:59.008825 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 08:57:59.009094 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 08:57:59.031036 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 08:57:59.095110 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 08:57:59.096323 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 08:57:59.098814 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 08:57:59.099643 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 08:57:59.100403 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 08:57:59.102140 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 08:57:59.102197 systemd[1]: Reached target paths.target - Path Units. Jan 16 08:57:59.102660 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 08:57:59.104286 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 08:57:59.105623 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 08:57:59.106046 systemd[1]: Reached target timers.target - Timer Units. Jan 16 08:57:59.108058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 08:57:59.116523 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 08:57:59.120736 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 08:57:59.122149 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 08:57:59.125119 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 08:57:59.125817 systemd[1]: Reached target basic.target - Basic System. 
Jan 16 08:57:59.127579 systemd[1]: System is tainted: cgroupsv1 Jan 16 08:57:59.127660 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 08:57:59.127687 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 08:57:59.128761 systemd-timesyncd[1520]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org). Jan 16 08:57:59.128825 systemd-timesyncd[1520]: Initial clock synchronization to Thu 2025-01-16 08:57:59.087491 UTC. Jan 16 08:57:59.135646 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 08:57:59.141515 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 08:57:59.153699 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 08:57:59.163606 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 08:57:59.178778 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 08:57:59.182660 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 08:57:59.194580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:59.206480 jq[1534]: false Jan 16 08:57:59.204218 dbus-daemon[1531]: [system] SELinux support is enabled Jan 16 08:57:59.208765 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 08:57:59.221526 coreos-metadata[1529]: Jan 16 08:57:59.221 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:57:59.222779 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 08:57:59.241516 coreos-metadata[1529]: Jan 16 08:57:59.241 INFO Fetch successful Jan 16 08:57:59.246266 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 08:57:59.254758 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 08:57:59.272789 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 08:57:59.282864 extend-filesystems[1535]: Found loop4 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found loop5 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found loop6 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found loop7 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda1 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda2 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda3 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found usr Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda4 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda6 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda7 Jan 16 08:57:59.288083 extend-filesystems[1535]: Found vda9 Jan 16 08:57:59.288083 extend-filesystems[1535]: Checking size of /dev/vda9 Jan 16 08:57:59.304171 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 08:57:59.326837 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 08:57:59.341924 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 08:57:59.367536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
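
systemd-timesyncd above reaches an NTP server from 0.flatcar.pool.ntp.org and steps the clock (note the synchronization target, 08:57:59.087491 UTC, sits slightly before the journal timestamp of the message itself). The exchange is plain SNTP over UDP port 123; a bare-bones client sketch that reads a server's transmit timestamp, not a replacement for timesyncd:

    import socket
    import struct
    import time

    NTP_UNIX_OFFSET = 2208988800  # seconds between the 1900 NTP and 1970 Unix epochs

    def sntp_time(server="0.flatcar.pool.ntp.org", timeout=5):
        packet = bytearray(48)
        packet[0] = (4 << 3) | 3  # version 4, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _addr = sock.recvfrom(48)
        # Transmit-timestamp seconds live at bytes 40-43 of the reply.
        (seconds,) = struct.unpack("!I", data[40:44])
        return seconds - NTP_UNIX_OFFSET

    print(time.ctime(sntp_time()))
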
Jan 16 08:57:59.370022 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 08:57:59.386516 extend-filesystems[1535]: Resized partition /dev/vda9 Jan 16 08:57:59.409589 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 08:57:59.410015 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 08:57:59.410793 extend-filesystems[1572]: resize2fs 1.47.1 (20-May-2024) Jan 16 08:57:59.426322 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 16 08:57:59.426458 update_engine[1563]: I20250116 08:57:59.419314 1563 main.cc:92] Flatcar Update Engine starting Jan 16 08:57:59.427258 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 08:57:59.427718 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 08:57:59.440122 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 08:57:59.459481 update_engine[1563]: I20250116 08:57:59.459129 1563 update_check_scheduler.cc:74] Next update check in 5m51s Jan 16 08:57:59.466488 jq[1569]: true Jan 16 08:57:59.462947 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 08:57:59.463340 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 08:57:59.487484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1224) Jan 16 08:57:59.496168 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 08:57:59.598528 jq[1579]: true Jan 16 08:57:59.615358 tar[1576]: linux-amd64/helm Jan 16 08:57:59.622247 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 08:57:59.651407 systemd[1]: Started update-engine.service - Update Engine. Jan 16 08:57:59.658263 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 08:57:59.659354 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 08:57:59.664036 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 08:57:59.664113 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 08:57:59.665990 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 08:57:59.666135 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 16 08:57:59.666168 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 08:57:59.670841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 08:57:59.681223 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 16 08:57:59.719539 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 16 08:57:59.783580 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 08:57:59.783580 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 16 08:57:59.783580 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 16 08:57:59.774090 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 08:57:59.803805 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Jan 16 08:57:59.803805 extend-filesystems[1535]: Found vdb Jan 16 08:57:59.774551 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 08:57:59.812252 systemd-logind[1555]: New seat seat0. Jan 16 08:57:59.825171 systemd-logind[1555]: Watching system buttons on /dev/input/event1 (Power Button) Jan 16 08:57:59.825202 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 08:57:59.825505 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 08:57:59.852464 bash[1620]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:57:59.853402 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 08:57:59.875814 systemd[1]: Starting sshkeys.service... Jan 16 08:57:59.946534 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 08:57:59.962907 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 08:58:00.051107 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 08:58:00.076463 containerd[1577]: time="2025-01-16T08:58:00.075143958Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 16 08:58:00.098462 coreos-metadata[1639]: Jan 16 08:58:00.098 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:58:00.112738 coreos-metadata[1639]: Jan 16 08:58:00.112 INFO Fetch successful Jan 16 08:58:00.141587 unknown[1639]: wrote ssh authorized keys file for user: core Jan 16 08:58:00.148816 containerd[1577]: time="2025-01-16T08:58:00.148454545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:00.164596 containerd[1577]: time="2025-01-16T08:58:00.164483816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:00.164596 containerd[1577]: time="2025-01-16T08:58:00.164555970Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 08:58:00.165154 containerd[1577]: time="2025-01-16T08:58:00.164825110Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 08:58:00.165568 containerd[1577]: time="2025-01-16T08:58:00.165493310Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 08:58:00.165822 containerd[1577]: time="2025-01-16T08:58:00.165537499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
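For scale: resize2fs grew /dev/vda9 from 553472 to 15121403 4 KiB blocks, i.e. from roughly 2.1 GiB to about 57.7 GiB, online and without unmounting /. The arithmetic, using the block counts straight from the log:

```python
BLOCK = 4096  # "(4k) blocks" per the resize2fs output above
old_blocks, new_blocks = 553_472, 15_121_403

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after:  57.68 GiB
```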
type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.166076757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.166147450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.166595271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.166626090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.166651342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.166667514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.166813847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.167112276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:00.167596 containerd[1577]: time="2025-01-16T08:58:00.167382185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:00.169516 containerd[1577]: time="2025-01-16T08:58:00.167406583Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 08:58:00.170028 containerd[1577]: time="2025-01-16T08:58:00.169983123Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 08:58:00.170377 containerd[1577]: time="2025-01-16T08:58:00.170335779Z" level=info msg="metadata content store policy set" policy=shared Jan 16 08:58:00.178917 containerd[1577]: time="2025-01-16T08:58:00.178668356Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 08:58:00.178917 containerd[1577]: time="2025-01-16T08:58:00.178780856Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 08:58:00.179373 containerd[1577]: time="2025-01-16T08:58:00.179146707Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 08:58:00.182226 containerd[1577]: time="2025-01-16T08:58:00.181524784Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 08:58:00.182226 containerd[1577]: time="2025-01-16T08:58:00.181605874Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 16 08:58:00.182226 containerd[1577]: time="2025-01-16T08:58:00.181950712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 08:58:00.185743 containerd[1577]: time="2025-01-16T08:58:00.185672801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 08:58:00.187584 containerd[1577]: time="2025-01-16T08:58:00.187464908Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 08:58:00.187848 containerd[1577]: time="2025-01-16T08:58:00.187815760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 08:58:00.188682 containerd[1577]: time="2025-01-16T08:58:00.188474052Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 08:58:00.188682 containerd[1577]: time="2025-01-16T08:58:00.188536294Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.188682 containerd[1577]: time="2025-01-16T08:58:00.188579079Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.188682 containerd[1577]: time="2025-01-16T08:58:00.188598332Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.188682 containerd[1577]: time="2025-01-16T08:58:00.188619415Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.189861 containerd[1577]: time="2025-01-16T08:58:00.189022999Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.189861 containerd[1577]: time="2025-01-16T08:58:00.189063867Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.189952 update-ssh-keys[1649]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.189716437Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190477455Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190537806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190568076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190601510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190625259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190644778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190692464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.190714980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.191048612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.191163638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.191196365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.191216630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193138 containerd[1577]: time="2025-01-16T08:58:00.191587773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191616178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191641145Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191686619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191718605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191738100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191825685Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191850990Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191867814Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191885121Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191897635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191915207Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191930510Z" level=info msg="NRI interface is disabled by configuration." 
Jan 16 08:58:00.193814 containerd[1577]: time="2025-01-16T08:58:00.191948016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 16 08:58:00.196202 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 08:58:00.201176 containerd[1577]: time="2025-01-16T08:58:00.192388321Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 08:58:00.201176 containerd[1577]: time="2025-01-16T08:58:00.195768890Z" level=info msg="Connect containerd service" Jan 16 08:58:00.201176 containerd[1577]: time="2025-01-16T08:58:00.195896044Z" level=info msg="using legacy CRI server" Jan 16 08:58:00.201176 containerd[1577]: time="2025-01-16T08:58:00.195950799Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 08:58:00.201176 containerd[1577]: time="2025-01-16T08:58:00.196679011Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 08:58:00.205583 systemd[1]: Finished sshkeys.service. 
Jan 16 08:58:00.217447 containerd[1577]: time="2025-01-16T08:58:00.212318795Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 08:58:00.218762 containerd[1577]: time="2025-01-16T08:58:00.217755568Z" level=info msg="Start subscribing containerd event" Jan 16 08:58:00.218762 containerd[1577]: time="2025-01-16T08:58:00.217879743Z" level=info msg="Start recovering state" Jan 16 08:58:00.218762 containerd[1577]: time="2025-01-16T08:58:00.218009470Z" level=info msg="Start event monitor" Jan 16 08:58:00.218762 containerd[1577]: time="2025-01-16T08:58:00.218046666Z" level=info msg="Start snapshots syncer" Jan 16 08:58:00.218762 containerd[1577]: time="2025-01-16T08:58:00.218063104Z" level=info msg="Start cni network conf syncer for default" Jan 16 08:58:00.218762 containerd[1577]: time="2025-01-16T08:58:00.218075346Z" level=info msg="Start streaming server" Jan 16 08:58:00.220045 containerd[1577]: time="2025-01-16T08:58:00.219994836Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 08:58:00.220328 containerd[1577]: time="2025-01-16T08:58:00.220300800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 08:58:00.220567 containerd[1577]: time="2025-01-16T08:58:00.220544858Z" level=info msg="containerd successfully booted in 0.146811s" Jan 16 08:58:00.220913 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 08:58:00.369342 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 08:58:00.416054 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 08:58:00.429025 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 08:58:00.443902 systemd[1]: Started sshd@0-24.199.127.61:22-147.75.109.163:47476.service - OpenSSH per-connection server daemon (147.75.109.163:47476). Jan 16 08:58:00.504217 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 08:58:00.504731 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 08:58:00.524124 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 08:58:00.595059 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 08:58:00.611349 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 08:58:00.631428 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 08:58:00.638747 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 08:58:00.723316 sshd[1667]: Accepted publickey for core from 147.75.109.163 port 47476 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:00.728867 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:00.760671 systemd-logind[1555]: New session 1 of user core. Jan 16 08:58:00.765377 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 08:58:00.783313 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 08:58:00.840875 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 08:58:00.863060 systemd[1]: Starting user@500.service - User Manager for UID 500... 
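The "failed to load cni during init" error at the start of the containerd bring-up above is expected on a node that has not yet joined a cluster: the CRI plugin looks for a network config under /etc/cni/net.d (the NetworkPluginConfDir in the config dump), and the cluster's CNI add-on normally drops one there later. Purely to illustrate the file format, a sketch that writes a minimal bridge conflist; the network name, bridge, and subnet are invented values, not what this node eventually used:

```python
import json

# Hypothetical minimal CNI network list; a real cluster's CNI add-on
# (flannel, cilium, ...) writes its own file into /etc/cni/net.d.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }
    ],
}

with open("/etc/cni/net.d/10-example.conflist", "w") as f:
    json.dump(conflist, f, indent=2)
```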
Jan 16 08:58:00.894845 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 08:58:01.131520 tar[1576]: linux-amd64/LICENSE Jan 16 08:58:01.131520 tar[1576]: linux-amd64/README.md Jan 16 08:58:01.167115 systemd[1683]: Queued start job for default target default.target. Jan 16 08:58:01.168646 systemd[1683]: Created slice app.slice - User Application Slice. Jan 16 08:58:01.168688 systemd[1683]: Reached target paths.target - Paths. Jan 16 08:58:01.168710 systemd[1683]: Reached target timers.target - Timers. Jan 16 08:58:01.181826 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 08:58:01.183178 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 08:58:01.215676 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 08:58:01.215784 systemd[1683]: Reached target sockets.target - Sockets. Jan 16 08:58:01.215808 systemd[1683]: Reached target basic.target - Basic System. Jan 16 08:58:01.215895 systemd[1683]: Reached target default.target - Main User Target. Jan 16 08:58:01.215943 systemd[1683]: Startup finished in 292ms. Jan 16 08:58:01.216311 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 08:58:01.231925 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 08:58:01.330941 systemd[1]: Started sshd@1-24.199.127.61:22-147.75.109.163:47480.service - OpenSSH per-connection server daemon (147.75.109.163:47480). Jan 16 08:58:01.437876 sshd[1700]: Accepted publickey for core from 147.75.109.163 port 47480 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:01.439356 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:01.453034 systemd-logind[1555]: New session 2 of user core. Jan 16 08:58:01.461060 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 08:58:01.551108 sshd[1703]: Connection closed by 147.75.109.163 port 47480 Jan 16 08:58:01.550980 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:01.565788 systemd[1]: Started sshd@2-24.199.127.61:22-147.75.109.163:47494.service - OpenSSH per-connection server daemon (147.75.109.163:47494). Jan 16 08:58:01.576231 systemd[1]: sshd@1-24.199.127.61:22-147.75.109.163:47480.service: Deactivated successfully. Jan 16 08:58:01.584247 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit. Jan 16 08:58:01.586063 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 08:58:01.594700 systemd-logind[1555]: Removed session 2. Jan 16 08:58:01.681552 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 47494 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:01.684285 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:01.695545 systemd-logind[1555]: New session 3 of user core. Jan 16 08:58:01.707306 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 08:58:01.794746 sshd[1711]: Connection closed by 147.75.109.163 port 47494 Jan 16 08:58:01.797714 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:01.802794 systemd[1]: sshd@2-24.199.127.61:22-147.75.109.163:47494.service: Deactivated successfully. Jan 16 08:58:01.808993 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. Jan 16 08:58:01.810193 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 16 08:58:01.813780 systemd-logind[1555]: Removed session 3. Jan 16 08:58:01.956905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:01.959534 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:58:01.960939 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 08:58:01.964921 systemd[1]: Startup finished in 7.658s (kernel) + 8.079s (userspace) = 15.738s. Jan 16 08:58:03.203659 kubelet[1724]: E0116 08:58:03.203526 1724 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:58:03.206768 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:58:03.207051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:58:11.790811 systemd[1]: Started sshd@3-24.199.127.61:22-147.75.109.163:47298.service - OpenSSH per-connection server daemon (147.75.109.163:47298). Jan 16 08:58:11.838878 sshd[1736]: Accepted publickey for core from 147.75.109.163 port 47298 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:11.840958 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:11.847175 systemd-logind[1555]: New session 4 of user core. Jan 16 08:58:11.855083 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 08:58:11.921789 sshd[1739]: Connection closed by 147.75.109.163 port 47298 Jan 16 08:58:11.924486 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:11.933010 systemd[1]: Started sshd@4-24.199.127.61:22-147.75.109.163:47306.service - OpenSSH per-connection server daemon (147.75.109.163:47306). Jan 16 08:58:11.933748 systemd[1]: sshd@3-24.199.127.61:22-147.75.109.163:47298.service: Deactivated successfully. Jan 16 08:58:11.936625 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 08:58:11.942705 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. Jan 16 08:58:11.944828 systemd-logind[1555]: Removed session 4. Jan 16 08:58:11.990110 sshd[1741]: Accepted publickey for core from 147.75.109.163 port 47306 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:11.992225 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:11.998000 systemd-logind[1555]: New session 5 of user core. Jan 16 08:58:12.006904 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 08:58:12.067656 sshd[1747]: Connection closed by 147.75.109.163 port 47306 Jan 16 08:58:12.068645 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:12.079902 systemd[1]: Started sshd@5-24.199.127.61:22-147.75.109.163:47310.service - OpenSSH per-connection server daemon (147.75.109.163:47310). Jan 16 08:58:12.081145 systemd[1]: sshd@4-24.199.127.61:22-147.75.109.163:47306.service: Deactivated successfully. Jan 16 08:58:12.082978 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 08:58:12.084709 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. Jan 16 08:58:12.087452 systemd-logind[1555]: Removed session 5. 
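The kubelet crash above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: kubelet.service is enabled before kubeadm has run, and kubeadm init/join is what writes that config file, so systemd simply keeps restarting the unit until it appears. For orientation only, a sketch of the smallest shape such a KubeletConfiguration takes; kubeadm generates the real file with many more fields, and writing it by hand is not part of this boot flow:

```python
# Illustrative only: kubeadm writes the real /var/lib/kubelet/config.yaml.
minimal_kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubeadm fills in authentication, cgroup driver, DNS, eviction settings, ...
"""

with open("/var/lib/kubelet/config.yaml", "w") as f:
    f.write(minimal_kubelet_config)
```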
Jan 16 08:58:12.128632 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 47310 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:12.130743 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:12.138230 systemd-logind[1555]: New session 6 of user core. Jan 16 08:58:12.146071 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 08:58:12.211197 sshd[1755]: Connection closed by 147.75.109.163 port 47310 Jan 16 08:58:12.212272 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:12.217431 systemd[1]: sshd@5-24.199.127.61:22-147.75.109.163:47310.service: Deactivated successfully. Jan 16 08:58:12.221548 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. Jan 16 08:58:12.232013 systemd[1]: Started sshd@6-24.199.127.61:22-147.75.109.163:47322.service - OpenSSH per-connection server daemon (147.75.109.163:47322). Jan 16 08:58:12.232674 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 08:58:12.235379 systemd-logind[1555]: Removed session 6. Jan 16 08:58:12.284481 sshd[1760]: Accepted publickey for core from 147.75.109.163 port 47322 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:12.286645 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:12.294763 systemd-logind[1555]: New session 7 of user core. Jan 16 08:58:12.304916 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 08:58:12.385881 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 08:58:12.386259 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:12.404681 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:12.410447 sshd[1763]: Connection closed by 147.75.109.163 port 47322 Jan 16 08:58:12.409278 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:12.420025 systemd[1]: Started sshd@7-24.199.127.61:22-147.75.109.163:47326.service - OpenSSH per-connection server daemon (147.75.109.163:47326). Jan 16 08:58:12.421144 systemd[1]: sshd@6-24.199.127.61:22-147.75.109.163:47322.service: Deactivated successfully. Jan 16 08:58:12.426938 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 08:58:12.430526 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Jan 16 08:58:12.434212 systemd-logind[1555]: Removed session 7. Jan 16 08:58:12.487726 sshd[1766]: Accepted publickey for core from 147.75.109.163 port 47326 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:12.490718 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:12.500025 systemd-logind[1555]: New session 8 of user core. Jan 16 08:58:12.506042 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 16 08:58:12.572669 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 08:58:12.573156 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:12.578927 sudo[1774]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:12.587275 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 08:58:12.587783 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:12.613975 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 08:58:12.656619 augenrules[1796]: No rules Jan 16 08:58:12.657357 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 08:58:12.657738 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 08:58:12.661846 sudo[1773]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:12.666708 sshd[1772]: Connection closed by 147.75.109.163 port 47326 Jan 16 08:58:12.667499 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:12.681020 systemd[1]: Started sshd@8-24.199.127.61:22-147.75.109.163:47330.service - OpenSSH per-connection server daemon (147.75.109.163:47330). Jan 16 08:58:12.681955 systemd[1]: sshd@7-24.199.127.61:22-147.75.109.163:47326.service: Deactivated successfully. Jan 16 08:58:12.686249 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 08:58:12.687222 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Jan 16 08:58:12.689703 systemd-logind[1555]: Removed session 8. Jan 16 08:58:12.736962 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 47330 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:12.739165 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:12.747818 systemd-logind[1555]: New session 9 of user core. Jan 16 08:58:12.755093 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 08:58:12.821168 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 08:58:12.821490 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:13.374639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 08:58:13.384812 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 08:58:13.387200 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 08:58:13.394408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:13.571676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 08:58:13.584071 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:58:13.685479 kubelet[1839]: E0116 08:58:13.683660 1839 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:58:13.692848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:58:13.693155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:58:13.959696 dockerd[1827]: time="2025-01-16T08:58:13.959014297Z" level=info msg="Starting up" Jan 16 08:58:14.087875 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2860487137-merged.mount: Deactivated successfully. Jan 16 08:58:14.145334 dockerd[1827]: time="2025-01-16T08:58:14.145272279Z" level=info msg="Loading containers: start." Jan 16 08:58:14.393703 kernel: Initializing XFRM netlink socket Jan 16 08:58:14.531101 systemd-networkd[1220]: docker0: Link UP Jan 16 08:58:14.576290 dockerd[1827]: time="2025-01-16T08:58:14.576234232Z" level=info msg="Loading containers: done." Jan 16 08:58:14.600565 dockerd[1827]: time="2025-01-16T08:58:14.599861326Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 08:58:14.600565 dockerd[1827]: time="2025-01-16T08:58:14.600070873Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 16 08:58:14.600565 dockerd[1827]: time="2025-01-16T08:58:14.600250778Z" level=info msg="Daemon has completed initialization" Jan 16 08:58:14.656003 dockerd[1827]: time="2025-01-16T08:58:14.655719801Z" level=info msg="API listen on /run/docker.sock" Jan 16 08:58:14.656768 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 08:58:15.076447 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3484192464-merged.mount: Deactivated successfully. Jan 16 08:58:15.850924 containerd[1577]: time="2025-01-16T08:58:15.850819652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 16 08:58:16.450813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484486031.mount: Deactivated successfully. 
Jan 16 08:58:18.312459 containerd[1577]: time="2025-01-16T08:58:18.312287190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:18.314368 containerd[1577]: time="2025-01-16T08:58:18.314297580Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 16 08:58:18.315202 containerd[1577]: time="2025-01-16T08:58:18.315096379Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:18.320463 containerd[1577]: time="2025-01-16T08:58:18.320099549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:18.322707 containerd[1577]: time="2025-01-16T08:58:18.322383212Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.471500935s" Jan 16 08:58:18.322707 containerd[1577]: time="2025-01-16T08:58:18.322476282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 16 08:58:18.361668 containerd[1577]: time="2025-01-16T08:58:18.361439681Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 16 08:58:20.393721 containerd[1577]: time="2025-01-16T08:58:20.393646705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:20.396456 containerd[1577]: time="2025-01-16T08:58:20.395608567Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 16 08:58:20.396456 containerd[1577]: time="2025-01-16T08:58:20.395913830Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:20.402468 containerd[1577]: time="2025-01-16T08:58:20.400533765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:20.402826 containerd[1577]: time="2025-01-16T08:58:20.402773838Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 2.041276497s" Jan 16 08:58:20.402982 containerd[1577]: time="2025-01-16T08:58:20.402957398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 16 
08:58:20.444981 containerd[1577]: time="2025-01-16T08:58:20.444926809Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 16 08:58:21.753854 containerd[1577]: time="2025-01-16T08:58:21.753768170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:21.756141 containerd[1577]: time="2025-01-16T08:58:21.755865457Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 16 08:58:21.756141 containerd[1577]: time="2025-01-16T08:58:21.756024855Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:21.762481 containerd[1577]: time="2025-01-16T08:58:21.760983938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:21.763171 containerd[1577]: time="2025-01-16T08:58:21.762870381Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.317684321s" Jan 16 08:58:21.763171 containerd[1577]: time="2025-01-16T08:58:21.762930375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 16 08:58:21.809731 containerd[1577]: time="2025-01-16T08:58:21.809289980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 16 08:58:22.217127 systemd-resolved[1472]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 16 08:58:23.051851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581083181.mount: Deactivated successfully. 
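Each pull pairs a byte count with a wall-clock duration, so effective registry throughput can be read off directly. Using the kube-scheduler numbers above (17332841 bytes read, 1.317684321 s); approximate, since the byte count and the duration come from two adjacent log lines:

```python
bytes_read = 17_332_841   # "bytes read" from the kube-scheduler pull above
seconds = 1.317684321     # duration reported for the same pull

print(f"{bytes_read / seconds / 2**20:.1f} MiB/s")  # ~12.5 MiB/s
```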
Jan 16 08:58:23.697089 containerd[1577]: time="2025-01-16T08:58:23.697020093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:23.698385 containerd[1577]: time="2025-01-16T08:58:23.698014761Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 16 08:58:23.699612 containerd[1577]: time="2025-01-16T08:58:23.699114696Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:23.702656 containerd[1577]: time="2025-01-16T08:58:23.702571775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:23.704000 containerd[1577]: time="2025-01-16T08:58:23.703938490Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.894595523s" Jan 16 08:58:23.704207 containerd[1577]: time="2025-01-16T08:58:23.704183518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 16 08:58:23.739338 containerd[1577]: time="2025-01-16T08:58:23.739251702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 16 08:58:23.943482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 08:58:23.957805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:24.147929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:24.160466 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:58:24.281789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158297075.mount: Deactivated successfully. Jan 16 08:58:24.307215 kubelet[2146]: E0116 08:58:24.307125 2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:58:24.310257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:58:24.310951 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:58:25.269649 systemd-resolved[1472]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 16 08:58:25.388071 containerd[1577]: time="2025-01-16T08:58:25.387981178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:25.389940 containerd[1577]: time="2025-01-16T08:58:25.389859219Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 16 08:58:25.390766 containerd[1577]: time="2025-01-16T08:58:25.390714487Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:25.393705 containerd[1577]: time="2025-01-16T08:58:25.393636575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:25.396168 containerd[1577]: time="2025-01-16T08:58:25.395935241Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.65635691s" Jan 16 08:58:25.396168 containerd[1577]: time="2025-01-16T08:58:25.396000115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 16 08:58:25.436124 containerd[1577]: time="2025-01-16T08:58:25.436069191Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 16 08:58:25.880844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472270221.mount: Deactivated successfully. 
Jan 16 08:58:25.885776 containerd[1577]: time="2025-01-16T08:58:25.885697267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:25.887572 containerd[1577]: time="2025-01-16T08:58:25.887492301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 16 08:58:25.889483 containerd[1577]: time="2025-01-16T08:58:25.889403180Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:25.891700 containerd[1577]: time="2025-01-16T08:58:25.891611265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:25.894066 containerd[1577]: time="2025-01-16T08:58:25.893861556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 457.744724ms" Jan 16 08:58:25.894066 containerd[1577]: time="2025-01-16T08:58:25.893923936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 16 08:58:25.929868 containerd[1577]: time="2025-01-16T08:58:25.929821447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 16 08:58:26.455140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233125420.mount: Deactivated successfully. Jan 16 08:58:28.327091 containerd[1577]: time="2025-01-16T08:58:28.327015669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:28.328703 containerd[1577]: time="2025-01-16T08:58:28.328632692Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 16 08:58:28.329442 containerd[1577]: time="2025-01-16T08:58:28.328942796Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:28.334453 containerd[1577]: time="2025-01-16T08:58:28.332914817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:28.335256 containerd[1577]: time="2025-01-16T08:58:28.335193467Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.405073748s" Jan 16 08:58:28.335492 containerd[1577]: time="2025-01-16T08:58:28.335461714Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 16 08:58:32.232608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
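After the reload below, kubelet finally starts with a real config but cannot reach the API server: every request to https://24.199.127.61:6443 fails with "connection refused", which on a node still bootstrapping its control plane usually just means kube-apiserver is not listening yet. A quick triage probe for that state, as a sketch using the address from the kubelet log:

```python
import socket

HOST, PORT = "24.199.127.61", 6443  # API server address from the kubelet log

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print("port open: apiserver is at least listening")
except OSError as exc:
    print(f"no apiserver yet: {exc}")  # e.g. ConnectionRefusedError
```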
Jan 16 08:58:32.243869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:32.272274 systemd[1]: Reloading requested from client PID 2323 ('systemctl') (unit session-9.scope)... Jan 16 08:58:32.272291 systemd[1]: Reloading... Jan 16 08:58:32.401448 zram_generator::config[2363]: No configuration found. Jan 16 08:58:32.541751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:58:32.635735 systemd[1]: Reloading finished in 362 ms. Jan 16 08:58:32.695970 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 08:58:32.696079 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 08:58:32.696395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:32.706080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:32.837666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:32.849239 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:58:32.919824 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:58:32.920440 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 08:58:32.920545 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:58:32.920761 kubelet[2428]: I0116 08:58:32.920707 2428 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:58:33.431901 kubelet[2428]: I0116 08:58:33.431828 2428 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 16 08:58:33.431901 kubelet[2428]: I0116 08:58:33.431874 2428 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:58:33.432272 kubelet[2428]: I0116 08:58:33.432159 2428 server.go:919] "Client rotation is on, will bootstrap in background" Jan 16 08:58:33.456039 kubelet[2428]: E0116 08:58:33.455776 2428 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://24.199.127.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.456039 kubelet[2428]: I0116 08:58:33.455825 2428 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:58:33.471297 kubelet[2428]: I0116 08:58:33.471232 2428 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 08:58:33.472028 kubelet[2428]: I0116 08:58:33.471989 2428 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:58:33.473516 kubelet[2428]: I0116 08:58:33.473406 2428 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 08:58:33.473516 kubelet[2428]: I0116 08:58:33.473512 2428 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:58:33.473516 kubelet[2428]: I0116 08:58:33.473530 2428 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 08:58:33.474038 kubelet[2428]: I0116 08:58:33.473843 2428 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:33.476614 kubelet[2428]: I0116 08:58:33.476196 2428 kubelet.go:396] "Attempting to sync node with API server" Jan 16 08:58:33.476614 kubelet[2428]: I0116 08:58:33.476251 2428 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:58:33.476614 kubelet[2428]: I0116 08:58:33.476300 2428 kubelet.go:312] "Adding apiserver pod source" Jan 16 08:58:33.476614 kubelet[2428]: I0116 08:58:33.476321 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:58:33.477437 kubelet[2428]: W0116 08:58:33.477215 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://24.199.127.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-393f89f1d0&limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.477437 kubelet[2428]: E0116 08:58:33.477275 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://24.199.127.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-393f89f1d0&limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.478625 kubelet[2428]: I0116 08:58:33.478598 2428 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 16 08:58:33.484276 kubelet[2428]: I0116 08:58:33.483772 
2428 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:58:33.484276 kubelet[2428]: W0116 08:58:33.483905 2428 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 08:58:33.485754 kubelet[2428]: I0116 08:58:33.485699 2428 server.go:1256] "Started kubelet" Jan 16 08:58:33.496020 kubelet[2428]: W0116 08:58:33.495914 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://24.199.127.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.496020 kubelet[2428]: E0116 08:58:33.496032 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://24.199.127.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.498604 kubelet[2428]: E0116 08:58:33.497878 2428 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.127.61:6443/api/v1/namespaces/default/events\": dial tcp 24.199.127.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-e-393f89f1d0.181b2096d5cd1c43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-393f89f1d0,UID:ci-4152.2.0-e-393f89f1d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-393f89f1d0,},FirstTimestamp:2025-01-16 08:58:33.485630531 +0000 UTC m=+0.629054343,LastTimestamp:2025-01-16 08:58:33.485630531 +0000 UTC m=+0.629054343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-393f89f1d0,}" Jan 16 08:58:33.498604 kubelet[2428]: I0116 08:58:33.498011 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:58:33.500458 kubelet[2428]: I0116 08:58:33.500179 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:58:33.500458 kubelet[2428]: I0116 08:58:33.500243 2428 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:58:33.502334 kubelet[2428]: I0116 08:58:33.502287 2428 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:58:33.504457 kubelet[2428]: I0116 08:58:33.503501 2428 server.go:461] "Adding debug handlers to kubelet server" Jan 16 08:58:33.509951 kubelet[2428]: I0116 08:58:33.509918 2428 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 08:58:33.510088 kubelet[2428]: I0116 08:58:33.510056 2428 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 16 08:58:33.511292 kubelet[2428]: I0116 08:58:33.510126 2428 reconciler_new.go:29] "Reconciler: start to sync state" Jan 16 08:58:33.511292 kubelet[2428]: W0116 08:58:33.510679 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://24.199.127.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.511292 kubelet[2428]: E0116 08:58:33.510745 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://24.199.127.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.512161 kubelet[2428]: E0116 08:58:33.512126 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.127.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-393f89f1d0?timeout=10s\": dial tcp 24.199.127.61:6443: connect: connection refused" interval="200ms" Jan 16 08:58:33.514965 kubelet[2428]: I0116 08:58:33.514925 2428 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:58:33.515130 kubelet[2428]: I0116 08:58:33.515040 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:58:33.517915 kubelet[2428]: E0116 08:58:33.517879 2428 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 08:58:33.520714 kubelet[2428]: I0116 08:58:33.520684 2428 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:58:33.528533 kubelet[2428]: I0116 08:58:33.527183 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:58:33.528533 kubelet[2428]: I0116 08:58:33.528526 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 08:58:33.528682 kubelet[2428]: I0116 08:58:33.528561 2428 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:58:33.528682 kubelet[2428]: I0116 08:58:33.528585 2428 kubelet.go:2329] "Starting kubelet main sync loop" Jan 16 08:58:33.528682 kubelet[2428]: E0116 08:58:33.528646 2428 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:58:33.542472 kubelet[2428]: W0116 08:58:33.542031 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://24.199.127.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.542472 kubelet[2428]: E0116 08:58:33.542113 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://24.199.127.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:33.555069 kubelet[2428]: I0116 08:58:33.555025 2428 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:58:33.555069 kubelet[2428]: I0116 08:58:33.555083 2428 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:58:33.555277 kubelet[2428]: I0116 08:58:33.555138 2428 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:33.557148 kubelet[2428]: I0116 08:58:33.557099 2428 policy_none.go:49] "None policy: Start" Jan 16 08:58:33.558510 kubelet[2428]: I0116 08:58:33.558352 2428 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:58:33.558510 kubelet[2428]: I0116 08:58:33.558443 2428 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:58:33.569492 kubelet[2428]: I0116 08:58:33.568959 2428 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:58:33.572463 kubelet[2428]: I0116 08:58:33.571658 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:58:33.574491 kubelet[2428]: E0116 08:58:33.574387 2428 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-e-393f89f1d0\" not found" Jan 16 08:58:33.612596 kubelet[2428]: I0116 08:58:33.611976 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.612596 kubelet[2428]: E0116 08:58:33.612447 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.127.61:6443/api/v1/nodes\": dial tcp 24.199.127.61:6443: connect: connection refused" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.629293 kubelet[2428]: I0116 08:58:33.629242 2428 topology_manager.go:215] "Topology Admit Handler" podUID="6b66b3a99bb8f264ccb2affb5ccc1acf" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.630575 kubelet[2428]: I0116 08:58:33.630548 2428 topology_manager.go:215] "Topology Admit Handler" podUID="61700e2d41671f66da0e5b135c339170" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.636049 kubelet[2428]: I0116 08:58:33.635360 2428 topology_manager.go:215] "Topology Admit Handler" podUID="9ee436fab466530b790c130af8cc2ca0" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.712105 kubelet[2428]: I0116 08:58:33.711969 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.712769 kubelet[2428]: E0116 08:58:33.712734 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.127.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-393f89f1d0?timeout=10s\": dial tcp 24.199.127.61:6443: connect: connection refused" interval="400ms" Jan 16 08:58:33.713024 kubelet[2428]: I0116 08:58:33.712924 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.713248 kubelet[2428]: I0116 08:58:33.713204 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b66b3a99bb8f264ccb2affb5ccc1acf-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-e-393f89f1d0\" (UID: \"6b66b3a99bb8f264ccb2affb5ccc1acf\") " pod="kube-system/kube-scheduler-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.713795 kubelet[2428]: I0116 08:58:33.713410 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61700e2d41671f66da0e5b135c339170-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-e-393f89f1d0\" (UID: 
\"61700e2d41671f66da0e5b135c339170\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.713795 kubelet[2428]: I0116 08:58:33.713457 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61700e2d41671f66da0e5b135c339170-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-e-393f89f1d0\" (UID: \"61700e2d41671f66da0e5b135c339170\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.713795 kubelet[2428]: I0116 08:58:33.713487 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.713795 kubelet[2428]: I0116 08:58:33.713512 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61700e2d41671f66da0e5b135c339170-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-e-393f89f1d0\" (UID: \"61700e2d41671f66da0e5b135c339170\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.713795 kubelet[2428]: I0116 08:58:33.713533 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.714248 kubelet[2428]: I0116 08:58:33.713555 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.815395 kubelet[2428]: I0116 08:58:33.814955 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.815395 kubelet[2428]: E0116 08:58:33.815360 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.127.61:6443/api/v1/nodes\": dial tcp 24.199.127.61:6443: connect: connection refused" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:33.936185 kubelet[2428]: E0116 08:58:33.936119 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:33.938480 containerd[1577]: time="2025-01-16T08:58:33.938054244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-e-393f89f1d0,Uid:6b66b3a99bb8f264ccb2affb5ccc1acf,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:33.942403 kubelet[2428]: E0116 08:58:33.942351 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:33.943062 containerd[1577]: time="2025-01-16T08:58:33.942989062Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-e-393f89f1d0,Uid:61700e2d41671f66da0e5b135c339170,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:33.944091 kubelet[2428]: E0116 08:58:33.944057 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:33.945067 systemd-resolved[1472]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 16 08:58:33.946099 containerd[1577]: time="2025-01-16T08:58:33.945071016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-e-393f89f1d0,Uid:9ee436fab466530b790c130af8cc2ca0,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:34.114380 kubelet[2428]: E0116 08:58:34.114257 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.127.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-393f89f1d0?timeout=10s\": dial tcp 24.199.127.61:6443: connect: connection refused" interval="800ms" Jan 16 08:58:34.217674 kubelet[2428]: I0116 08:58:34.217633 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:34.218141 kubelet[2428]: E0116 08:58:34.218105 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.127.61:6443/api/v1/nodes\": dial tcp 24.199.127.61:6443: connect: connection refused" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:34.459034 kubelet[2428]: W0116 08:58:34.458766 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://24.199.127.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.459034 kubelet[2428]: E0116 08:58:34.458859 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://24.199.127.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.460360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478905792.mount: Deactivated successfully. 
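The repeated "Failed to ensure lease exists, will retry" and "Unable to register node" errors above are expected while the control plane bootstraps: the static kube-apiserver pod is not running yet, so every request to https://24.199.127.61:6443 is refused and the kubelet retries with a doubling interval (200ms, 400ms, 800ms, then 1.6s in these entries). The heartbeat being retried is an upsert of a coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace. The sketch below shows that loop under stated assumptions: the kubeconfig path is illustrative (the journal only shows the client cert at /var/lib/kubelet/pki/kubelet-client-current.pem), and a real kubelet renews an existing lease via Update rather than only creating it.

package main

import (
	"context"
	"log"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/utils/ptr"
)

func main() {
	// Hypothetical kubeconfig path, used here only to build a clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node := "ci-4152.2.0-e-393f89f1d0"
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       ptr.To(node),
			LeaseDurationSeconds: ptr.To(int32(40)),
			RenewTime:            &metav1.MicroTime{Time: time.Now()},
		},
	}

	interval := 200 * time.Millisecond
	for {
		_, err := client.CoordinationV1().Leases("kube-node-lease").
			Create(context.Background(), lease, metav1.CreateOptions{})
		if err == nil || apierrors.IsAlreadyExists(err) {
			return // an existing lease would be renewed with Update instead
		}
		log.Printf("failed to ensure lease exists, will retry in %s: %v", interval, err)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the log above
	}
}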
Jan 16 08:58:34.464582 containerd[1577]: time="2025-01-16T08:58:34.463901626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:34.471836 containerd[1577]: time="2025-01-16T08:58:34.471733992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 08:58:34.475450 containerd[1577]: time="2025-01-16T08:58:34.473972223Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:34.475830 containerd[1577]: time="2025-01-16T08:58:34.475765348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:58:34.477806 containerd[1577]: time="2025-01-16T08:58:34.477730502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:58:34.477960 containerd[1577]: time="2025-01-16T08:58:34.477924997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:34.484458 containerd[1577]: time="2025-01-16T08:58:34.484374520Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:34.486207 containerd[1577]: time="2025-01-16T08:58:34.486148967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.98891ms" Jan 16 08:58:34.490369 containerd[1577]: time="2025-01-16T08:58:34.489333903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.131334ms" Jan 16 08:58:34.492793 containerd[1577]: time="2025-01-16T08:58:34.492736523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:34.494380 containerd[1577]: time="2025-01-16T08:58:34.493218470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.899814ms" Jan 16 08:58:34.679731 containerd[1577]: time="2025-01-16T08:58:34.679586740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:34.680019 containerd[1577]: time="2025-01-16T08:58:34.679953658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:34.680165 containerd[1577]: time="2025-01-16T08:58:34.680113964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:34.680552 containerd[1577]: time="2025-01-16T08:58:34.680470456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:34.691487 containerd[1577]: time="2025-01-16T08:58:34.689659410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:34.691487 containerd[1577]: time="2025-01-16T08:58:34.690061885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:34.691986 kubelet[2428]: W0116 08:58:34.691900 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://24.199.127.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.691986 kubelet[2428]: E0116 08:58:34.691957 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://24.199.127.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.692495 containerd[1577]: time="2025-01-16T08:58:34.692388207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:34.693122 containerd[1577]: time="2025-01-16T08:58:34.692869045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:34.693301 containerd[1577]: time="2025-01-16T08:58:34.692752773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:34.693301 containerd[1577]: time="2025-01-16T08:58:34.692842934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:34.693301 containerd[1577]: time="2025-01-16T08:58:34.692871682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:34.693301 containerd[1577]: time="2025-01-16T08:58:34.693005087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:34.848768 containerd[1577]: time="2025-01-16T08:58:34.847422128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-e-393f89f1d0,Uid:61700e2d41671f66da0e5b135c339170,Namespace:kube-system,Attempt:0,} returns sandbox id \"239f8960b9cdd63024c4e2bed50bf3ecc0e6fd3ff75f0c931891c4afb6925dc4\"" Jan 16 08:58:34.853818 kubelet[2428]: E0116 08:58:34.853314 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:34.858999 containerd[1577]: time="2025-01-16T08:58:34.858939911Z" level=info msg="CreateContainer within sandbox \"239f8960b9cdd63024c4e2bed50bf3ecc0e6fd3ff75f0c931891c4afb6925dc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 08:58:34.868404 containerd[1577]: time="2025-01-16T08:58:34.868331809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-e-393f89f1d0,Uid:9ee436fab466530b790c130af8cc2ca0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f6e69480f5b16c188464099323eb50dd819b7ed0acd7cc86c37df0833013961\"" Jan 16 08:58:34.869398 kubelet[2428]: W0116 08:58:34.869324 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://24.199.127.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-393f89f1d0&limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.869762 kubelet[2428]: E0116 08:58:34.869740 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://24.199.127.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-393f89f1d0&limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.870822 kubelet[2428]: W0116 08:58:34.870619 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://24.199.127.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.870822 kubelet[2428]: E0116 08:58:34.870749 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://24.199.127.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:34.872524 containerd[1577]: time="2025-01-16T08:58:34.872040789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-e-393f89f1d0,Uid:6b66b3a99bb8f264ccb2affb5ccc1acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7cbe5964fabba83bdc94dff5477c536e1375caf9639a6023ab8f27cc54da420\"" Jan 16 08:58:34.873918 kubelet[2428]: E0116 08:58:34.873650 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:34.876791 kubelet[2428]: E0116 08:58:34.876742 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:34.882868 containerd[1577]: time="2025-01-16T08:58:34.882742536Z" 
level=info msg="CreateContainer within sandbox \"0f6e69480f5b16c188464099323eb50dd819b7ed0acd7cc86c37df0833013961\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 08:58:34.890536 containerd[1577]: time="2025-01-16T08:58:34.889567088Z" level=info msg="CreateContainer within sandbox \"c7cbe5964fabba83bdc94dff5477c536e1375caf9639a6023ab8f27cc54da420\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 08:58:34.897109 containerd[1577]: time="2025-01-16T08:58:34.897024928Z" level=info msg="CreateContainer within sandbox \"239f8960b9cdd63024c4e2bed50bf3ecc0e6fd3ff75f0c931891c4afb6925dc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3036ab0a3af8190606b54077640caa3537c920a448ecfa470f8c1549752871a\"" Jan 16 08:58:34.898138 containerd[1577]: time="2025-01-16T08:58:34.898093244Z" level=info msg="StartContainer for \"b3036ab0a3af8190606b54077640caa3537c920a448ecfa470f8c1549752871a\"" Jan 16 08:58:34.905809 containerd[1577]: time="2025-01-16T08:58:34.905740515Z" level=info msg="CreateContainer within sandbox \"0f6e69480f5b16c188464099323eb50dd819b7ed0acd7cc86c37df0833013961\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3290aa6d0c67b1422c545a789fb579c2387c06e8e816612011abb52cbbda10cc\"" Jan 16 08:58:34.908453 containerd[1577]: time="2025-01-16T08:58:34.907534001Z" level=info msg="StartContainer for \"3290aa6d0c67b1422c545a789fb579c2387c06e8e816612011abb52cbbda10cc\"" Jan 16 08:58:34.915425 kubelet[2428]: E0116 08:58:34.915359 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.127.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-393f89f1d0?timeout=10s\": dial tcp 24.199.127.61:6443: connect: connection refused" interval="1.6s" Jan 16 08:58:34.915828 containerd[1577]: time="2025-01-16T08:58:34.915792849Z" level=info msg="CreateContainer within sandbox \"c7cbe5964fabba83bdc94dff5477c536e1375caf9639a6023ab8f27cc54da420\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6575f2f9159cea97475997b7552fcf92aceaf3454e8d2df9e0f6ce7eb6d691a6\"" Jan 16 08:58:34.916675 containerd[1577]: time="2025-01-16T08:58:34.916634451Z" level=info msg="StartContainer for \"6575f2f9159cea97475997b7552fcf92aceaf3454e8d2df9e0f6ce7eb6d691a6\"" Jan 16 08:58:35.021655 kubelet[2428]: I0116 08:58:35.021498 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:35.028808 kubelet[2428]: E0116 08:58:35.028375 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.127.61:6443/api/v1/nodes\": dial tcp 24.199.127.61:6443: connect: connection refused" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:35.113455 containerd[1577]: time="2025-01-16T08:58:35.112385794Z" level=info msg="StartContainer for \"3290aa6d0c67b1422c545a789fb579c2387c06e8e816612011abb52cbbda10cc\" returns successfully" Jan 16 08:58:35.121933 containerd[1577]: time="2025-01-16T08:58:35.119962763Z" level=info msg="StartContainer for \"b3036ab0a3af8190606b54077640caa3537c920a448ecfa470f8c1549752871a\" returns successfully" Jan 16 08:58:35.162512 containerd[1577]: time="2025-01-16T08:58:35.162039430Z" level=info msg="StartContainer for \"6575f2f9159cea97475997b7552fcf92aceaf3454e8d2df9e0f6ce7eb6d691a6\" returns successfully" Jan 16 08:58:35.527107 kubelet[2428]: E0116 08:58:35.526937 2428 certificate_manager.go:562] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://24.199.127.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 24.199.127.61:6443: connect: connection refused Jan 16 08:58:35.581194 kubelet[2428]: E0116 08:58:35.581152 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:35.590226 kubelet[2428]: E0116 08:58:35.589128 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:35.598711 kubelet[2428]: E0116 08:58:35.598514 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:36.600174 kubelet[2428]: E0116 08:58:36.600132 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:36.632367 kubelet[2428]: I0116 08:58:36.632323 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:37.774820 kubelet[2428]: E0116 08:58:37.774713 2428 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.0-e-393f89f1d0\" not found" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:37.903696 kubelet[2428]: I0116 08:58:37.901496 2428 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:37.913146 kubelet[2428]: E0116 08:58:37.913107 2428 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.0-e-393f89f1d0.181b2096d5cd1c43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-393f89f1d0,UID:ci-4152.2.0-e-393f89f1d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-393f89f1d0,},FirstTimestamp:2025-01-16 08:58:33.485630531 +0000 UTC m=+0.629054343,LastTimestamp:2025-01-16 08:58:33.485630531 +0000 UTC m=+0.629054343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-393f89f1d0,}" Jan 16 08:58:38.481233 kubelet[2428]: I0116 08:58:38.481131 2428 apiserver.go:52] "Watching apiserver" Jan 16 08:58:38.511004 kubelet[2428]: I0116 08:58:38.510912 2428 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 08:58:38.801090 kubelet[2428]: W0116 08:58:38.797219 2428 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:58:38.801090 kubelet[2428]: E0116 08:58:38.798383 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:39.611567 kubelet[2428]: E0116 08:58:39.611526 2428 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:41.694206 systemd[1]: Reloading requested from client PID 2706 ('systemctl') (unit session-9.scope)... Jan 16 08:58:41.694239 systemd[1]: Reloading... Jan 16 08:58:41.806784 zram_generator::config[2744]: No configuration found. Jan 16 08:58:42.037228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:58:42.139707 systemd[1]: Reloading finished in 444 ms. Jan 16 08:58:42.188683 kubelet[2428]: I0116 08:58:42.188240 2428 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:58:42.190746 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:42.204474 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 08:58:42.205010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:42.218323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:42.392602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:42.410248 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:58:42.493038 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:58:42.493038 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 08:58:42.493038 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:58:42.494082 kubelet[2806]: I0116 08:58:42.494009 2806 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:58:42.502284 kubelet[2806]: I0116 08:58:42.502222 2806 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 16 08:58:42.502284 kubelet[2806]: I0116 08:58:42.502274 2806 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:58:42.502836 kubelet[2806]: I0116 08:58:42.502747 2806 server.go:919] "Client rotation is on, will bootstrap in background" Jan 16 08:58:42.505618 kubelet[2806]: I0116 08:58:42.505560 2806 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 08:58:42.509833 kubelet[2806]: I0116 08:58:42.509244 2806 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:58:42.524951 kubelet[2806]: I0116 08:58:42.524873 2806 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 08:58:42.525840 kubelet[2806]: I0116 08:58:42.525798 2806 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:58:42.526204 kubelet[2806]: I0116 08:58:42.526135 2806 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 08:58:42.526204 kubelet[2806]: I0116 08:58:42.526192 2806 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:58:42.526204 kubelet[2806]: I0116 08:58:42.526209 2806 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 08:58:42.526585 kubelet[2806]: I0116 08:58:42.526278 2806 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:42.526585 kubelet[2806]: I0116 08:58:42.526517 2806 kubelet.go:396] "Attempting to sync node with API server" Jan 16 08:58:42.529472 kubelet[2806]: I0116 08:58:42.527499 2806 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:58:42.529472 kubelet[2806]: I0116 08:58:42.527574 2806 kubelet.go:312] "Adding apiserver pod source" Jan 16 08:58:42.529472 kubelet[2806]: I0116 08:58:42.527610 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:58:42.541687 kubelet[2806]: I0116 08:58:42.541338 2806 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 16 08:58:42.541687 kubelet[2806]: I0116 08:58:42.541684 2806 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:58:42.542773 kubelet[2806]: I0116 08:58:42.542279 2806 server.go:1256] "Started kubelet" Jan 16 08:58:42.546632 kubelet[2806]: I0116 08:58:42.544284 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:58:42.548902 kubelet[2806]: I0116 08:58:42.548102 2806 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:58:42.548902 kubelet[2806]: I0116 08:58:42.548214 2806 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:58:42.548902 kubelet[2806]: 
I0116 08:58:42.548884 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:58:42.550562 kubelet[2806]: I0116 08:58:42.550521 2806 server.go:461] "Adding debug handlers to kubelet server" Jan 16 08:58:42.561465 kubelet[2806]: I0116 08:58:42.561370 2806 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 08:58:42.566422 kubelet[2806]: I0116 08:58:42.566384 2806 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 16 08:58:42.567107 kubelet[2806]: I0116 08:58:42.566925 2806 reconciler_new.go:29] "Reconciler: start to sync state" Jan 16 08:58:42.569073 kubelet[2806]: E0116 08:58:42.569009 2806 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 08:58:42.569541 kubelet[2806]: I0116 08:58:42.566575 2806 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:58:42.569541 kubelet[2806]: I0116 08:58:42.569492 2806 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:58:42.571652 kubelet[2806]: I0116 08:58:42.571521 2806 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:58:42.601046 kubelet[2806]: I0116 08:58:42.600891 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:58:42.610679 kubelet[2806]: I0116 08:58:42.610635 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 08:58:42.610946 kubelet[2806]: I0116 08:58:42.610911 2806 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:58:42.611063 kubelet[2806]: I0116 08:58:42.611050 2806 kubelet.go:2329] "Starting kubelet main sync loop" Jan 16 08:58:42.611341 kubelet[2806]: E0116 08:58:42.611319 2806 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:58:42.666230 kubelet[2806]: I0116 08:58:42.666041 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.679363 kubelet[2806]: I0116 08:58:42.678833 2806 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:58:42.679363 kubelet[2806]: I0116 08:58:42.678865 2806 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:58:42.679363 kubelet[2806]: I0116 08:58:42.678892 2806 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:42.679363 kubelet[2806]: I0116 08:58:42.679175 2806 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 08:58:42.679363 kubelet[2806]: I0116 08:58:42.679207 2806 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 08:58:42.679363 kubelet[2806]: I0116 08:58:42.679219 2806 policy_none.go:49] "None policy: Start" Jan 16 08:58:42.680565 kubelet[2806]: I0116 08:58:42.680408 2806 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:58:42.680779 kubelet[2806]: I0116 08:58:42.680578 2806 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:58:42.680830 kubelet[2806]: I0116 08:58:42.680811 2806 state_mem.go:75] "Updated machine memory state" Jan 16 08:58:42.682909 kubelet[2806]: I0116 08:58:42.682704 2806 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:58:42.683081 
kubelet[2806]: I0116 08:58:42.683041 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:58:42.712209 kubelet[2806]: I0116 08:58:42.712151 2806 topology_manager.go:215] "Topology Admit Handler" podUID="61700e2d41671f66da0e5b135c339170" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.712476 kubelet[2806]: I0116 08:58:42.712277 2806 topology_manager.go:215] "Topology Admit Handler" podUID="9ee436fab466530b790c130af8cc2ca0" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.712476 kubelet[2806]: I0116 08:58:42.712312 2806 topology_manager.go:215] "Topology Admit Handler" podUID="6b66b3a99bb8f264ccb2affb5ccc1acf" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769452 kubelet[2806]: I0116 08:58:42.769069 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769452 kubelet[2806]: I0116 08:58:42.769128 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769452 kubelet[2806]: I0116 08:58:42.769151 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769452 kubelet[2806]: I0116 08:58:42.769173 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769452 kubelet[2806]: I0116 08:58:42.769207 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ee436fab466530b790c130af8cc2ca0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-e-393f89f1d0\" (UID: \"9ee436fab466530b790c130af8cc2ca0\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769730 kubelet[2806]: I0116 08:58:42.769238 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b66b3a99bb8f264ccb2affb5ccc1acf-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-e-393f89f1d0\" (UID: \"6b66b3a99bb8f264ccb2affb5ccc1acf\") " pod="kube-system/kube-scheduler-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769730 kubelet[2806]: I0116 08:58:42.769264 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61700e2d41671f66da0e5b135c339170-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-e-393f89f1d0\" (UID: \"61700e2d41671f66da0e5b135c339170\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769730 kubelet[2806]: I0116 08:58:42.769290 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61700e2d41671f66da0e5b135c339170-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-e-393f89f1d0\" (UID: \"61700e2d41671f66da0e5b135c339170\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:42.769730 kubelet[2806]: I0116 08:58:42.769337 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61700e2d41671f66da0e5b135c339170-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-e-393f89f1d0\" (UID: \"61700e2d41671f66da0e5b135c339170\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:43.532497 kubelet[2806]: I0116 08:58:43.532400 2806 apiserver.go:52] "Watching apiserver" Jan 16 08:58:43.566875 kubelet[2806]: I0116 08:58:43.566791 2806 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 08:58:44.252365 update_engine[1563]: I20250116 08:58:44.251899 1563 update_attempter.cc:509] Updating boot flags... Jan 16 08:58:44.300493 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2839) Jan 16 08:58:44.379465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2842) Jan 16 08:58:44.449577 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2842) Jan 16 08:58:49.549480 kubelet[2806]: E0116 08:58:49.548861 2806 event.go:346] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{ci-4152.2.0-e-393f89f1d0.181b2098f19e1c4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-393f89f1d0,UID:ci-4152.2.0-e-393f89f1d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-393f89f1d0,},FirstTimestamp:2025-01-16 08:58:42.542246991 +0000 UTC m=+0.123001391,LastTimestamp:2025-01-16 08:58:42.542246991 +0000 UTC m=+0.123001391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-393f89f1d0,}" Jan 16 08:58:49.674773 kubelet[2806]: E0116 08:58:49.674683 2806 kubelet_node_status.go:96] "Unable to register node with API server" err="etcdserver: request timed out" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:49.732437 kubelet[2806]: W0116 08:58:49.732354 2806 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:58:49.732672 kubelet[2806]: E0116 08:58:49.732481 2806 kubelet.go:1921] "Failed creating a mirror pod for" err="etcdserver: request timed out" pod="kube-system/kube-scheduler-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:49.734374 kubelet[2806]: W0116 08:58:49.734041 2806 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; 
a DNS label is recommended: [must not contain dots] Jan 16 08:58:49.734633 kubelet[2806]: W0116 08:58:49.734614 2806 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:58:49.734827 kubelet[2806]: E0116 08:58:49.734752 2806 kubelet.go:1921] "Failed creating a mirror pod for" err="etcdserver: request timed out" pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:49.735042 kubelet[2806]: E0116 08:58:49.734906 2806 kubelet.go:1921] "Failed creating a mirror pod for" err="etcdserver: request timed out" pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:49.735643 kubelet[2806]: E0116 08:58:49.735554 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:49.735982 kubelet[2806]: E0116 08:58:49.735861 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:49.735982 kubelet[2806]: E0116 08:58:49.734656 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:49.876761 kubelet[2806]: I0116 08:58:49.876723 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:50.490388 kubelet[2806]: I0116 08:58:50.489853 2806 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:50.490388 kubelet[2806]: I0116 08:58:50.490062 2806 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-e-393f89f1d0" Jan 16 08:58:50.654565 kubelet[2806]: E0116 08:58:50.654524 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:50.655193 kubelet[2806]: E0116 08:58:50.654574 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:50.655193 kubelet[2806]: E0116 08:58:50.655178 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:51.166141 kubelet[2806]: I0116 08:58:51.166076 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.0-e-393f89f1d0" podStartSLOduration=13.165921356 podStartE2EDuration="13.165921356s" podCreationTimestamp="2025-01-16 08:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:50.512637659 +0000 UTC m=+8.093392058" watchObservedRunningTime="2025-01-16 08:58:51.165921356 +0000 UTC m=+8.746675746" Jan 16 08:58:51.166398 kubelet[2806]: I0116 08:58:51.166291 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.0-e-393f89f1d0" podStartSLOduration=9.166251292 podStartE2EDuration="9.166251292s" podCreationTimestamp="2025-01-16 08:58:42 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:51.159985473 +0000 UTC m=+8.740739868" watchObservedRunningTime="2025-01-16 08:58:51.166251292 +0000 UTC m=+8.747005686" Jan 16 08:58:51.234336 kubelet[2806]: I0116 08:58:51.234068 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.0-e-393f89f1d0" podStartSLOduration=9.234010095 podStartE2EDuration="9.234010095s" podCreationTimestamp="2025-01-16 08:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:51.233341061 +0000 UTC m=+8.814095458" watchObservedRunningTime="2025-01-16 08:58:51.234010095 +0000 UTC m=+8.814764489" Jan 16 08:58:51.392132 sudo[2852]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 16 08:58:51.392792 sudo[2852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 16 08:58:51.668479 kubelet[2806]: E0116 08:58:51.668437 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:52.168957 sudo[2852]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:53.862001 sudo[1809]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:53.866694 sshd[1808]: Connection closed by 147.75.109.163 port 47330 Jan 16 08:58:53.868055 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:53.874368 systemd[1]: sshd@8-24.199.127.61:22-147.75.109.163:47330.service: Deactivated successfully. Jan 16 08:58:53.880749 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Jan 16 08:58:53.881602 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 08:58:53.885302 systemd-logind[1555]: Removed session 9. Jan 16 08:58:54.364499 kubelet[2806]: I0116 08:58:54.364459 2806 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 08:58:54.367511 kubelet[2806]: I0116 08:58:54.365863 2806 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 08:58:54.367623 containerd[1577]: time="2025-01-16T08:58:54.365003711Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 16 08:58:54.522851 kubelet[2806]: E0116 08:58:54.522802 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:54.672735 kubelet[2806]: E0116 08:58:54.672582 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:55.056350 kubelet[2806]: I0116 08:58:55.055952 2806 topology_manager.go:215] "Topology Admit Handler" podUID="3400d8ea-a801-457b-83e6-9655cbe13358" podNamespace="kube-system" podName="kube-proxy-rkr2v" Jan 16 08:58:55.101469 kubelet[2806]: I0116 08:58:55.100004 2806 topology_manager.go:215] "Topology Admit Handler" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" podNamespace="kube-system" podName="cilium-2v4d6" Jan 16 08:58:55.151685 kubelet[2806]: I0116 08:58:55.151633 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-etc-cni-netd\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.151888 kubelet[2806]: I0116 08:58:55.151702 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3400d8ea-a801-457b-83e6-9655cbe13358-lib-modules\") pod \"kube-proxy-rkr2v\" (UID: \"3400d8ea-a801-457b-83e6-9655cbe13358\") " pod="kube-system/kube-proxy-rkr2v" Jan 16 08:58:55.151888 kubelet[2806]: I0116 08:58:55.151739 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqq5q\" (UniqueName: \"kubernetes.io/projected/3400d8ea-a801-457b-83e6-9655cbe13358-kube-api-access-vqq5q\") pod \"kube-proxy-rkr2v\" (UID: \"3400d8ea-a801-457b-83e6-9655cbe13358\") " pod="kube-system/kube-proxy-rkr2v" Jan 16 08:58:55.151888 kubelet[2806]: I0116 08:58:55.151773 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-config-path\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.151888 kubelet[2806]: I0116 08:58:55.151800 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-net\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.151888 kubelet[2806]: I0116 08:58:55.151832 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-kernel\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152136 kubelet[2806]: I0116 08:58:55.151861 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtcr\" (UniqueName: \"kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-kube-api-access-2mtcr\") pod \"cilium-2v4d6\" (UID: 
\"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152136 kubelet[2806]: I0116 08:58:55.151889 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-bpf-maps\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152136 kubelet[2806]: I0116 08:58:55.151918 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-cgroup\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152136 kubelet[2806]: I0116 08:58:55.151965 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3400d8ea-a801-457b-83e6-9655cbe13358-xtables-lock\") pod \"kube-proxy-rkr2v\" (UID: \"3400d8ea-a801-457b-83e6-9655cbe13358\") " pod="kube-system/kube-proxy-rkr2v" Jan 16 08:58:55.152136 kubelet[2806]: I0116 08:58:55.152008 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-lib-modules\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152136 kubelet[2806]: I0116 08:58:55.152039 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-xtables-lock\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152390 kubelet[2806]: I0116 08:58:55.152069 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hubble-tls\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152390 kubelet[2806]: I0116 08:58:55.152118 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hostproc\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152390 kubelet[2806]: I0116 08:58:55.152162 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3400d8ea-a801-457b-83e6-9655cbe13358-kube-proxy\") pod \"kube-proxy-rkr2v\" (UID: \"3400d8ea-a801-457b-83e6-9655cbe13358\") " pod="kube-system/kube-proxy-rkr2v" Jan 16 08:58:55.152390 kubelet[2806]: I0116 08:58:55.152192 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-run\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152390 kubelet[2806]: I0116 08:58:55.152224 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cni-path\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.152390 kubelet[2806]: I0116 08:58:55.152257 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-clustermesh-secrets\") pod \"cilium-2v4d6\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " pod="kube-system/cilium-2v4d6" Jan 16 08:58:55.375567 kubelet[2806]: E0116 08:58:55.375310 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:55.378520 containerd[1577]: time="2025-01-16T08:58:55.376785736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkr2v,Uid:3400d8ea-a801-457b-83e6-9655cbe13358,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:55.416528 kubelet[2806]: E0116 08:58:55.413583 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:55.420677 containerd[1577]: time="2025-01-16T08:58:55.420346248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2v4d6,Uid:5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:55.433969 kubelet[2806]: I0116 08:58:55.429551 2806 topology_manager.go:215] "Topology Admit Handler" podUID="f4d2f7b4-41c6-49c3-8f9b-61f02e17884d" podNamespace="kube-system" podName="cilium-operator-5cc964979-cnmmd" Jan 16 08:58:55.446900 containerd[1577]: time="2025-01-16T08:58:55.446770589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:55.446900 containerd[1577]: time="2025-01-16T08:58:55.446843826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:55.448497 containerd[1577]: time="2025-01-16T08:58:55.446856938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:55.448497 containerd[1577]: time="2025-01-16T08:58:55.446969937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:55.456661 kubelet[2806]: I0116 08:58:55.456594 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp52z\" (UniqueName: \"kubernetes.io/projected/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-kube-api-access-fp52z\") pod \"cilium-operator-5cc964979-cnmmd\" (UID: \"f4d2f7b4-41c6-49c3-8f9b-61f02e17884d\") " pod="kube-system/cilium-operator-5cc964979-cnmmd" Jan 16 08:58:55.456661 kubelet[2806]: I0116 08:58:55.456680 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-cilium-config-path\") pod \"cilium-operator-5cc964979-cnmmd\" (UID: \"f4d2f7b4-41c6-49c3-8f9b-61f02e17884d\") " pod="kube-system/cilium-operator-5cc964979-cnmmd" Jan 16 08:58:55.539249 containerd[1577]: time="2025-01-16T08:58:55.538806546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:55.539249 containerd[1577]: time="2025-01-16T08:58:55.538918420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:55.539249 containerd[1577]: time="2025-01-16T08:58:55.538937023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:55.540119 containerd[1577]: time="2025-01-16T08:58:55.539973009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:55.561923 containerd[1577]: time="2025-01-16T08:58:55.561851617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkr2v,Uid:3400d8ea-a801-457b-83e6-9655cbe13358,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d4c73d3e5919b82e218edff6e5a71fcf26c42d97c38abd8525cadddc863494c\"" Jan 16 08:58:55.576176 kubelet[2806]: E0116 08:58:55.575300 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:55.589532 containerd[1577]: time="2025-01-16T08:58:55.589254585Z" level=info msg="CreateContainer within sandbox \"3d4c73d3e5919b82e218edff6e5a71fcf26c42d97c38abd8525cadddc863494c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 08:58:55.617556 containerd[1577]: time="2025-01-16T08:58:55.617349454Z" level=info msg="CreateContainer within sandbox \"3d4c73d3e5919b82e218edff6e5a71fcf26c42d97c38abd8525cadddc863494c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"335084c169a11763c21ef730a0cc7ff7d62199fee5e1ea6b1e61d1ee19ac7d67\"" Jan 16 08:58:55.618772 containerd[1577]: time="2025-01-16T08:58:55.618700634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2v4d6,Uid:5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\"" Jan 16 08:58:55.619629 kubelet[2806]: E0116 08:58:55.619455 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:55.620692 containerd[1577]: time="2025-01-16T08:58:55.619880822Z" level=info 
msg="StartContainer for \"335084c169a11763c21ef730a0cc7ff7d62199fee5e1ea6b1e61d1ee19ac7d67\"" Jan 16 08:58:55.627194 containerd[1577]: time="2025-01-16T08:58:55.626786487Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 16 08:58:55.713148 containerd[1577]: time="2025-01-16T08:58:55.712932753Z" level=info msg="StartContainer for \"335084c169a11763c21ef730a0cc7ff7d62199fee5e1ea6b1e61d1ee19ac7d67\" returns successfully" Jan 16 08:58:55.767344 kubelet[2806]: E0116 08:58:55.767011 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:55.770556 containerd[1577]: time="2025-01-16T08:58:55.770470562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cnmmd,Uid:f4d2f7b4-41c6-49c3-8f9b-61f02e17884d,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:55.824558 containerd[1577]: time="2025-01-16T08:58:55.823982065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:55.824558 containerd[1577]: time="2025-01-16T08:58:55.824110584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:55.824558 containerd[1577]: time="2025-01-16T08:58:55.824142098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:55.827780 containerd[1577]: time="2025-01-16T08:58:55.827625967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:55.988056 containerd[1577]: time="2025-01-16T08:58:55.987709261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cnmmd,Uid:f4d2f7b4-41c6-49c3-8f9b-61f02e17884d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099\"" Jan 16 08:58:55.990576 kubelet[2806]: E0116 08:58:55.990198 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:56.685703 kubelet[2806]: E0116 08:58:56.685456 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:56.702585 kubelet[2806]: I0116 08:58:56.701437 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rkr2v" podStartSLOduration=1.70136161 podStartE2EDuration="1.70136161s" podCreationTimestamp="2025-01-16 08:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:56.700728071 +0000 UTC m=+14.281482502" watchObservedRunningTime="2025-01-16 08:58:56.70136161 +0000 UTC m=+14.282116008" Jan 16 08:58:56.920340 kubelet[2806]: E0116 08:58:56.920292 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:57.688590 kubelet[2806]: E0116 08:58:57.687922 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:57.688590 kubelet[2806]: E0116 08:58:57.688309 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:58.850754 kubelet[2806]: E0116 08:58:58.850455 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:58:59.739394 kubelet[2806]: E0116 08:58:59.738636 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:02.877881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938782294.mount: Deactivated successfully. 
Jan 16 08:59:05.977409 containerd[1577]: time="2025-01-16T08:59:05.977304212Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:05.979152 containerd[1577]: time="2025-01-16T08:59:05.978097855Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735339" Jan 16 08:59:05.980264 containerd[1577]: time="2025-01-16T08:59:05.979786080Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:05.982527 containerd[1577]: time="2025-01-16T08:59:05.982465233Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.355617121s" Jan 16 08:59:05.982754 containerd[1577]: time="2025-01-16T08:59:05.982727968Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 16 08:59:05.983866 containerd[1577]: time="2025-01-16T08:59:05.983789604Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 16 08:59:06.002104 containerd[1577]: time="2025-01-16T08:59:06.002034782Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 16 08:59:06.101896 containerd[1577]: time="2025-01-16T08:59:06.101795785Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e\"" Jan 16 08:59:06.111373 containerd[1577]: time="2025-01-16T08:59:06.111093815Z" level=info msg="StartContainer for \"870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e\"" Jan 16 08:59:06.302323 containerd[1577]: time="2025-01-16T08:59:06.301881379Z" level=info msg="StartContainer for \"870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e\" returns successfully" Jan 16 08:59:06.420439 containerd[1577]: time="2025-01-16T08:59:06.384355934Z" level=info msg="shim disconnected" id=870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e namespace=k8s.io Jan 16 08:59:06.420439 containerd[1577]: time="2025-01-16T08:59:06.420203787Z" level=warning msg="cleaning up after shim disconnected" id=870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e namespace=k8s.io Jan 16 08:59:06.420439 containerd[1577]: time="2025-01-16T08:59:06.420227001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:06.778867 kubelet[2806]: E0116 08:59:06.777964 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Jan 16 08:59:06.786265 containerd[1577]: time="2025-01-16T08:59:06.785878105Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 16 08:59:06.802634 containerd[1577]: time="2025-01-16T08:59:06.802447154Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9\"" Jan 16 08:59:06.806501 containerd[1577]: time="2025-01-16T08:59:06.805042380Z" level=info msg="StartContainer for \"02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9\"" Jan 16 08:59:06.907019 containerd[1577]: time="2025-01-16T08:59:06.906956687Z" level=info msg="StartContainer for \"02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9\" returns successfully" Jan 16 08:59:06.927162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 08:59:06.927740 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:59:06.927867 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:59:06.937468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:59:06.978135 containerd[1577]: time="2025-01-16T08:59:06.977965705Z" level=info msg="shim disconnected" id=02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9 namespace=k8s.io Jan 16 08:59:06.978135 containerd[1577]: time="2025-01-16T08:59:06.978034127Z" level=warning msg="cleaning up after shim disconnected" id=02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9 namespace=k8s.io Jan 16 08:59:06.978135 containerd[1577]: time="2025-01-16T08:59:06.978045585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:06.994036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:59:07.093493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e-rootfs.mount: Deactivated successfully. Jan 16 08:59:07.783137 kubelet[2806]: E0116 08:59:07.783091 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:07.788885 containerd[1577]: time="2025-01-16T08:59:07.788628695Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 16 08:59:07.839566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925557327.mount: Deactivated successfully. 
Jan 16 08:59:07.842838 containerd[1577]: time="2025-01-16T08:59:07.842771177Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592\"" Jan 16 08:59:07.843734 containerd[1577]: time="2025-01-16T08:59:07.843689168Z" level=info msg="StartContainer for \"df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592\"" Jan 16 08:59:07.947647 containerd[1577]: time="2025-01-16T08:59:07.947099901Z" level=info msg="StartContainer for \"df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592\" returns successfully" Jan 16 08:59:07.979825 containerd[1577]: time="2025-01-16T08:59:07.979752094Z" level=info msg="shim disconnected" id=df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592 namespace=k8s.io Jan 16 08:59:07.979825 containerd[1577]: time="2025-01-16T08:59:07.979816904Z" level=warning msg="cleaning up after shim disconnected" id=df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592 namespace=k8s.io Jan 16 08:59:07.979825 containerd[1577]: time="2025-01-16T08:59:07.979825674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:08.093864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592-rootfs.mount: Deactivated successfully. Jan 16 08:59:08.789203 kubelet[2806]: E0116 08:59:08.788523 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:08.793517 containerd[1577]: time="2025-01-16T08:59:08.793207166Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 16 08:59:08.824996 containerd[1577]: time="2025-01-16T08:59:08.824812483Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9\"" Jan 16 08:59:08.830351 containerd[1577]: time="2025-01-16T08:59:08.827939768Z" level=info msg="StartContainer for \"3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9\"" Jan 16 08:59:08.917655 containerd[1577]: time="2025-01-16T08:59:08.917419466Z" level=info msg="StartContainer for \"3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9\" returns successfully" Jan 16 08:59:08.947866 containerd[1577]: time="2025-01-16T08:59:08.947431696Z" level=info msg="shim disconnected" id=3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9 namespace=k8s.io Jan 16 08:59:08.947866 containerd[1577]: time="2025-01-16T08:59:08.947540472Z" level=warning msg="cleaning up after shim disconnected" id=3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9 namespace=k8s.io Jan 16 08:59:08.947866 containerd[1577]: time="2025-01-16T08:59:08.947555548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:09.095694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9-rootfs.mount: Deactivated successfully. 
Jan 16 08:59:09.120164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673965612.mount: Deactivated successfully. Jan 16 08:59:09.801947 kubelet[2806]: E0116 08:59:09.801899 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:09.814911 containerd[1577]: time="2025-01-16T08:59:09.813384887Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 16 08:59:09.856637 containerd[1577]: time="2025-01-16T08:59:09.855383764Z" level=info msg="CreateContainer within sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\"" Jan 16 08:59:09.858576 containerd[1577]: time="2025-01-16T08:59:09.857213379Z" level=info msg="StartContainer for \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\"" Jan 16 08:59:10.024556 containerd[1577]: time="2025-01-16T08:59:10.022811801Z" level=info msg="StartContainer for \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\" returns successfully" Jan 16 08:59:10.296184 kubelet[2806]: I0116 08:59:10.296106 2806 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 16 08:59:10.349580 kubelet[2806]: I0116 08:59:10.347366 2806 topology_manager.go:215] "Topology Admit Handler" podUID="a57877a6-6eef-4cee-9dea-5c89fdfe526d" podNamespace="kube-system" podName="coredns-76f75df574-fpznq" Jan 16 08:59:10.352488 kubelet[2806]: I0116 08:59:10.352423 2806 topology_manager.go:215] "Topology Admit Handler" podUID="9b87d7e0-a324-4b0f-a3c4-5c209da6016d" podNamespace="kube-system" podName="coredns-76f75df574-5mtsc" Jan 16 08:59:10.508638 kubelet[2806]: I0116 08:59:10.508576 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b87d7e0-a324-4b0f-a3c4-5c209da6016d-config-volume\") pod \"coredns-76f75df574-5mtsc\" (UID: \"9b87d7e0-a324-4b0f-a3c4-5c209da6016d\") " pod="kube-system/coredns-76f75df574-5mtsc" Jan 16 08:59:10.511682 kubelet[2806]: I0116 08:59:10.509711 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt645\" (UniqueName: \"kubernetes.io/projected/9b87d7e0-a324-4b0f-a3c4-5c209da6016d-kube-api-access-dt645\") pod \"coredns-76f75df574-5mtsc\" (UID: \"9b87d7e0-a324-4b0f-a3c4-5c209da6016d\") " pod="kube-system/coredns-76f75df574-5mtsc" Jan 16 08:59:10.511682 kubelet[2806]: I0116 08:59:10.509785 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a57877a6-6eef-4cee-9dea-5c89fdfe526d-config-volume\") pod \"coredns-76f75df574-fpznq\" (UID: \"a57877a6-6eef-4cee-9dea-5c89fdfe526d\") " pod="kube-system/coredns-76f75df574-fpznq" Jan 16 08:59:10.511682 kubelet[2806]: I0116 08:59:10.509829 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwcxc\" (UniqueName: \"kubernetes.io/projected/a57877a6-6eef-4cee-9dea-5c89fdfe526d-kube-api-access-qwcxc\") pod \"coredns-76f75df574-fpznq\" (UID: \"a57877a6-6eef-4cee-9dea-5c89fdfe526d\") " 
pod="kube-system/coredns-76f75df574-fpznq" Jan 16 08:59:10.678970 kubelet[2806]: E0116 08:59:10.678058 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:10.680910 kubelet[2806]: E0116 08:59:10.680472 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:10.684175 containerd[1577]: time="2025-01-16T08:59:10.683697337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5mtsc,Uid:9b87d7e0-a324-4b0f-a3c4-5c209da6016d,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:10.684675 containerd[1577]: time="2025-01-16T08:59:10.684613079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fpznq,Uid:a57877a6-6eef-4cee-9dea-5c89fdfe526d,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:10.923586 kubelet[2806]: E0116 08:59:10.922958 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:10.963912 kubelet[2806]: I0116 08:59:10.962388 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2v4d6" podStartSLOduration=5.605261025 podStartE2EDuration="15.962336259s" podCreationTimestamp="2025-01-16 08:58:55 +0000 UTC" firstStartedPulling="2025-01-16 08:58:55.626218175 +0000 UTC m=+13.206972548" lastFinishedPulling="2025-01-16 08:59:05.98329341 +0000 UTC m=+23.564047782" observedRunningTime="2025-01-16 08:59:10.958565639 +0000 UTC m=+28.539320036" watchObservedRunningTime="2025-01-16 08:59:10.962336259 +0000 UTC m=+28.543090675" Jan 16 08:59:11.924439 kubelet[2806]: E0116 08:59:11.924165 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:12.927545 kubelet[2806]: E0116 08:59:12.927505 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:16.347540 containerd[1577]: time="2025-01-16T08:59:16.346505872Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:16.348188 containerd[1577]: time="2025-01-16T08:59:16.347993282Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907265" Jan 16 08:59:16.349501 containerd[1577]: time="2025-01-16T08:59:16.349117291Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:16.352368 containerd[1577]: time="2025-01-16T08:59:16.351571033Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 10.366758547s" Jan 16 08:59:16.352368 containerd[1577]: time="2025-01-16T08:59:16.351629981Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 16 08:59:16.358516 containerd[1577]: time="2025-01-16T08:59:16.358450287Z" level=info msg="CreateContainer within sandbox \"6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 16 08:59:16.375918 containerd[1577]: time="2025-01-16T08:59:16.375495157Z" level=info msg="CreateContainer within sandbox \"6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\"" Jan 16 08:59:16.377155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472421390.mount: Deactivated successfully. Jan 16 08:59:16.381620 containerd[1577]: time="2025-01-16T08:59:16.380910554Z" level=info msg="StartContainer for \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\"" Jan 16 08:59:16.480910 containerd[1577]: time="2025-01-16T08:59:16.480837057Z" level=info msg="StartContainer for \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\" returns successfully" Jan 16 08:59:16.959381 kubelet[2806]: E0116 08:59:16.959187 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:17.961510 kubelet[2806]: E0116 08:59:17.959803 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:19.332331 systemd-networkd[1220]: cilium_host: Link UP Jan 16 08:59:19.340158 systemd-networkd[1220]: cilium_net: Link UP Jan 16 08:59:19.340172 systemd-networkd[1220]: cilium_net: Gained carrier Jan 16 08:59:19.341824 systemd-networkd[1220]: cilium_host: Gained carrier Jan 16 08:59:19.518670 systemd-networkd[1220]: cilium_vxlan: Link UP Jan 16 08:59:19.518882 systemd-networkd[1220]: cilium_vxlan: Gained carrier Jan 16 08:59:19.799121 systemd-networkd[1220]: cilium_host: Gained IPv6LL Jan 16 08:59:19.926099 systemd-networkd[1220]: cilium_net: Gained IPv6LL Jan 16 08:59:20.099457 kernel: NET: Registered PF_ALG protocol family Jan 16 08:59:21.200070 systemd-networkd[1220]: lxc_health: Link UP Jan 16 08:59:21.210663 systemd-networkd[1220]: lxc_health: Gained carrier Jan 16 08:59:21.211197 systemd-networkd[1220]: cilium_vxlan: Gained IPv6LL Jan 16 08:59:21.426741 kubelet[2806]: E0116 08:59:21.425180 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:21.473257 kubelet[2806]: I0116 08:59:21.472518 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-cnmmd" podStartSLOduration=6.109992056 podStartE2EDuration="26.470549049s" podCreationTimestamp="2025-01-16 08:58:55 +0000 UTC" firstStartedPulling="2025-01-16 08:58:55.991401024 
+0000 UTC m=+13.572155412" lastFinishedPulling="2025-01-16 08:59:16.351958017 +0000 UTC m=+33.932712405" observedRunningTime="2025-01-16 08:59:17.007862656 +0000 UTC m=+34.588617052" watchObservedRunningTime="2025-01-16 08:59:21.470549049 +0000 UTC m=+39.051303465" Jan 16 08:59:21.835757 systemd-networkd[1220]: lxc133701ffdbff: Link UP Jan 16 08:59:21.847462 kernel: eth0: renamed from tmpbabde Jan 16 08:59:21.862279 systemd-networkd[1220]: lxc133701ffdbff: Gained carrier Jan 16 08:59:21.878728 systemd-networkd[1220]: lxced44d3809a1d: Link UP Jan 16 08:59:21.895810 kernel: eth0: renamed from tmp25ac3 Jan 16 08:59:21.905560 systemd-networkd[1220]: lxced44d3809a1d: Gained carrier Jan 16 08:59:21.982459 kubelet[2806]: E0116 08:59:21.979925 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:22.806685 systemd-networkd[1220]: lxc_health: Gained IPv6LL Jan 16 08:59:22.986316 kubelet[2806]: E0116 08:59:22.986273 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:22.997639 systemd-networkd[1220]: lxc133701ffdbff: Gained IPv6LL Jan 16 08:59:23.317625 systemd-networkd[1220]: lxced44d3809a1d: Gained IPv6LL Jan 16 08:59:28.547036 containerd[1577]: time="2025-01-16T08:59:28.546673708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:28.549154 containerd[1577]: time="2025-01-16T08:59:28.547785535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:28.549154 containerd[1577]: time="2025-01-16T08:59:28.547910720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:28.549154 containerd[1577]: time="2025-01-16T08:59:28.548313422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:28.644455 containerd[1577]: time="2025-01-16T08:59:28.639549138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:28.646325 containerd[1577]: time="2025-01-16T08:59:28.644058592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:28.646325 containerd[1577]: time="2025-01-16T08:59:28.644091069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:28.646325 containerd[1577]: time="2025-01-16T08:59:28.644439407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:28.702978 containerd[1577]: time="2025-01-16T08:59:28.702704350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fpznq,Uid:a57877a6-6eef-4cee-9dea-5c89fdfe526d,Namespace:kube-system,Attempt:0,} returns sandbox id \"25ac38e152aa59d0a925b9111b191a1afa2342ee26f2bc6efe9c61a5c8e9d127\"" Jan 16 08:59:28.706480 kubelet[2806]: E0116 08:59:28.705104 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:28.717161 containerd[1577]: time="2025-01-16T08:59:28.715895647Z" level=info msg="CreateContainer within sandbox \"25ac38e152aa59d0a925b9111b191a1afa2342ee26f2bc6efe9c61a5c8e9d127\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 08:59:28.775656 containerd[1577]: time="2025-01-16T08:59:28.775581496Z" level=info msg="CreateContainer within sandbox \"25ac38e152aa59d0a925b9111b191a1afa2342ee26f2bc6efe9c61a5c8e9d127\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c78511335f2f29716f00a92e3d6cfc6e722dc0449c9601610fc733431719ee16\"" Jan 16 08:59:28.778943 containerd[1577]: time="2025-01-16T08:59:28.776992251Z" level=info msg="StartContainer for \"c78511335f2f29716f00a92e3d6cfc6e722dc0449c9601610fc733431719ee16\"" Jan 16 08:59:28.814505 containerd[1577]: time="2025-01-16T08:59:28.814066766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5mtsc,Uid:9b87d7e0-a324-4b0f-a3c4-5c209da6016d,Namespace:kube-system,Attempt:0,} returns sandbox id \"babdecd8fa9d6ac076182c01dd65cdd36d443b8472fa61d82c8ac6f744a1fd21\"" Jan 16 08:59:28.817507 kubelet[2806]: E0116 08:59:28.817260 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:28.823648 containerd[1577]: time="2025-01-16T08:59:28.822038975Z" level=info msg="CreateContainer within sandbox \"babdecd8fa9d6ac076182c01dd65cdd36d443b8472fa61d82c8ac6f744a1fd21\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 08:59:28.842688 containerd[1577]: time="2025-01-16T08:59:28.842605847Z" level=info msg="CreateContainer within sandbox \"babdecd8fa9d6ac076182c01dd65cdd36d443b8472fa61d82c8ac6f744a1fd21\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03366e6862a11e790c79575b7424ec6107b178ebe7159ae2af8e694b1c9aa302\"" Jan 16 08:59:28.846938 containerd[1577]: time="2025-01-16T08:59:28.846528994Z" level=info msg="StartContainer for \"03366e6862a11e790c79575b7424ec6107b178ebe7159ae2af8e694b1c9aa302\"" Jan 16 08:59:28.928504 containerd[1577]: time="2025-01-16T08:59:28.927293136Z" level=info msg="StartContainer for \"c78511335f2f29716f00a92e3d6cfc6e722dc0449c9601610fc733431719ee16\" returns successfully" Jan 16 08:59:28.974378 containerd[1577]: time="2025-01-16T08:59:28.974301473Z" level=info msg="StartContainer for \"03366e6862a11e790c79575b7424ec6107b178ebe7159ae2af8e694b1c9aa302\" returns successfully" Jan 16 08:59:29.024537 kubelet[2806]: E0116 08:59:29.024019 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:29.040000 kubelet[2806]: E0116 08:59:29.039201 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:29.067561 kubelet[2806]: I0116 08:59:29.064755 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5mtsc" podStartSLOduration=34.064682347 podStartE2EDuration="34.064682347s" podCreationTimestamp="2025-01-16 08:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:29.064175304 +0000 UTC m=+46.644929731" watchObservedRunningTime="2025-01-16 08:59:29.064682347 +0000 UTC m=+46.645436767" Jan 16 08:59:29.093452 kubelet[2806]: I0116 08:59:29.091859 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fpznq" podStartSLOduration=34.091801503 podStartE2EDuration="34.091801503s" podCreationTimestamp="2025-01-16 08:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:29.089777183 +0000 UTC m=+46.670531578" watchObservedRunningTime="2025-01-16 08:59:29.091801503 +0000 UTC m=+46.672555900" Jan 16 08:59:29.559482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214769400.mount: Deactivated successfully. Jan 16 08:59:30.047462 kubelet[2806]: E0116 08:59:30.046972 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:30.053989 kubelet[2806]: E0116 08:59:30.053486 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:31.050034 kubelet[2806]: E0116 08:59:31.049893 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 08:59:31.051288 kubelet[2806]: E0116 08:59:31.051126 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:06.621968 kubelet[2806]: E0116 09:00:06.616469 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:10.618974 kubelet[2806]: E0116 09:00:10.618267 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:18.813961 systemd[1]: Started sshd@9-24.199.127.61:22-47.250.81.7:44218.service - OpenSSH per-connection server daemon (47.250.81.7:44218). 
Jan 16 09:00:19.612552 kubelet[2806]: E0116 09:00:19.612470 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:20.613961 kubelet[2806]: E0116 09:00:20.613295 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:22.054477 sshd[4193]: kex_exchange_identification: read: Connection reset by peer Jan 16 09:00:22.054477 sshd[4193]: Connection reset by 47.250.81.7 port 44218 Jan 16 09:00:22.055681 systemd[1]: sshd@9-24.199.127.61:22-47.250.81.7:44218.service: Deactivated successfully. Jan 16 09:00:22.230649 systemd[1]: Started sshd@10-24.199.127.61:22-47.250.81.7:44224.service - OpenSSH per-connection server daemon (47.250.81.7:44224). Jan 16 09:00:22.937823 sshd[4197]: Invalid user from 47.250.81.7 port 44224 Jan 16 09:00:23.104645 sshd[4197]: Connection closed by invalid user 47.250.81.7 port 44224 [preauth] Jan 16 09:00:23.109093 systemd[1]: sshd@10-24.199.127.61:22-47.250.81.7:44224.service: Deactivated successfully. Jan 16 09:00:23.613722 kubelet[2806]: E0116 09:00:23.613672 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:32.614642 kubelet[2806]: E0116 09:00:32.614592 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:34.614328 kubelet[2806]: E0116 09:00:34.613272 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:00:54.612624 kubelet[2806]: E0116 09:00:54.612520 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:26.613933 kubelet[2806]: E0116 09:01:26.613479 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:28.613008 kubelet[2806]: E0116 09:01:28.612406 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:34.613480 kubelet[2806]: E0116 09:01:34.612713 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:37.613463 kubelet[2806]: E0116 09:01:37.613109 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:39.469840 systemd[1]: Started sshd@11-24.199.127.61:22-147.75.109.163:36528.service - OpenSSH per-connection server daemon (147.75.109.163:36528). 
Jan 16 09:01:39.576386 sshd[4213]: Accepted publickey for core from 147.75.109.163 port 36528 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:39.581745 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:39.600703 systemd-logind[1555]: New session 10 of user core. Jan 16 09:01:39.611036 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 09:01:40.419360 sshd[4216]: Connection closed by 147.75.109.163 port 36528 Jan 16 09:01:40.420265 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:40.432846 systemd[1]: sshd@11-24.199.127.61:22-147.75.109.163:36528.service: Deactivated successfully. Jan 16 09:01:40.437991 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Jan 16 09:01:40.439392 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 09:01:40.441594 systemd-logind[1555]: Removed session 10. Jan 16 09:01:43.614693 kubelet[2806]: E0116 09:01:43.613819 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:45.435311 systemd[1]: Started sshd@12-24.199.127.61:22-147.75.109.163:36544.service - OpenSSH per-connection server daemon (147.75.109.163:36544). Jan 16 09:01:45.526692 sshd[4230]: Accepted publickey for core from 147.75.109.163 port 36544 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:45.527710 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:45.544563 systemd-logind[1555]: New session 11 of user core. Jan 16 09:01:45.551156 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 09:01:45.617400 kubelet[2806]: E0116 09:01:45.617353 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:45.814462 sshd[4233]: Connection closed by 147.75.109.163 port 36544 Jan 16 09:01:45.816045 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:45.841242 systemd[1]: sshd@12-24.199.127.61:22-147.75.109.163:36544.service: Deactivated successfully. Jan 16 09:01:45.850009 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Jan 16 09:01:45.852435 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 09:01:45.854806 systemd-logind[1555]: Removed session 11. Jan 16 09:01:49.613910 kubelet[2806]: E0116 09:01:49.613865 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:50.829984 systemd[1]: Started sshd@13-24.199.127.61:22-147.75.109.163:44542.service - OpenSSH per-connection server daemon (147.75.109.163:44542). Jan 16 09:01:50.915326 sshd[4245]: Accepted publickey for core from 147.75.109.163 port 44542 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:50.918578 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:50.925544 systemd-logind[1555]: New session 12 of user core. Jan 16 09:01:50.934913 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 16 09:01:51.117623 sshd[4248]: Connection closed by 147.75.109.163 port 44542 Jan 16 09:01:51.118212 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:51.125839 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Jan 16 09:01:51.127690 systemd[1]: sshd@13-24.199.127.61:22-147.75.109.163:44542.service: Deactivated successfully. Jan 16 09:01:51.135820 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 09:01:51.138470 systemd-logind[1555]: Removed session 12. Jan 16 09:01:55.615006 kubelet[2806]: E0116 09:01:55.614536 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:56.129500 systemd[1]: Started sshd@14-24.199.127.61:22-147.75.109.163:44558.service - OpenSSH per-connection server daemon (147.75.109.163:44558). Jan 16 09:01:56.195487 sshd[4262]: Accepted publickey for core from 147.75.109.163 port 44558 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:56.198462 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:56.207556 systemd-logind[1555]: New session 13 of user core. Jan 16 09:01:56.216054 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 09:01:56.404535 sshd[4265]: Connection closed by 147.75.109.163 port 44558 Jan 16 09:01:56.408544 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:56.413726 systemd[1]: sshd@14-24.199.127.61:22-147.75.109.163:44558.service: Deactivated successfully. Jan 16 09:01:56.421713 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Jan 16 09:01:56.423062 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 09:01:56.425079 systemd-logind[1555]: Removed session 13. Jan 16 09:02:01.418957 systemd[1]: Started sshd@15-24.199.127.61:22-147.75.109.163:56458.service - OpenSSH per-connection server daemon (147.75.109.163:56458). Jan 16 09:02:01.501156 sshd[4277]: Accepted publickey for core from 147.75.109.163 port 56458 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:01.505340 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:01.518652 systemd-logind[1555]: New session 14 of user core. Jan 16 09:02:01.533111 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 09:02:01.785686 sshd[4280]: Connection closed by 147.75.109.163 port 56458 Jan 16 09:02:01.786391 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:01.798942 systemd[1]: sshd@15-24.199.127.61:22-147.75.109.163:56458.service: Deactivated successfully. Jan 16 09:02:01.807884 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 09:02:01.811332 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Jan 16 09:02:01.814013 systemd-logind[1555]: Removed session 14. Jan 16 09:02:06.797960 systemd[1]: Started sshd@16-24.199.127.61:22-147.75.109.163:56468.service - OpenSSH per-connection server daemon (147.75.109.163:56468). 
Jan 16 09:02:06.879210 sshd[4292]: Accepted publickey for core from 147.75.109.163 port 56468 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:06.881841 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:06.890406 systemd-logind[1555]: New session 15 of user core. Jan 16 09:02:06.896163 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 09:02:07.099598 sshd[4295]: Connection closed by 147.75.109.163 port 56468 Jan 16 09:02:07.101860 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:07.111110 systemd[1]: Started sshd@17-24.199.127.61:22-147.75.109.163:56470.service - OpenSSH per-connection server daemon (147.75.109.163:56470). Jan 16 09:02:07.112820 systemd[1]: sshd@16-24.199.127.61:22-147.75.109.163:56468.service: Deactivated successfully. Jan 16 09:02:07.118924 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 09:02:07.121862 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. Jan 16 09:02:07.129018 systemd-logind[1555]: Removed session 15. Jan 16 09:02:07.185690 sshd[4303]: Accepted publickey for core from 147.75.109.163 port 56470 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:07.188099 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:07.197712 systemd-logind[1555]: New session 16 of user core. Jan 16 09:02:07.205053 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 16 09:02:07.448646 sshd[4309]: Connection closed by 147.75.109.163 port 56470 Jan 16 09:02:07.453572 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:07.478290 systemd[1]: Started sshd@18-24.199.127.61:22-147.75.109.163:54730.service - OpenSSH per-connection server daemon (147.75.109.163:54730). Jan 16 09:02:07.483276 systemd[1]: sshd@17-24.199.127.61:22-147.75.109.163:56470.service: Deactivated successfully. Jan 16 09:02:07.508054 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 09:02:07.515535 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Jan 16 09:02:07.523263 systemd-logind[1555]: Removed session 16. Jan 16 09:02:07.617843 sshd[4315]: Accepted publickey for core from 147.75.109.163 port 54730 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:07.621324 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:07.632961 systemd-logind[1555]: New session 17 of user core. Jan 16 09:02:07.642283 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 09:02:07.890800 sshd[4321]: Connection closed by 147.75.109.163 port 54730 Jan 16 09:02:07.890310 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:07.902684 systemd[1]: sshd@18-24.199.127.61:22-147.75.109.163:54730.service: Deactivated successfully. Jan 16 09:02:07.910928 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 09:02:07.917898 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Jan 16 09:02:07.925489 systemd-logind[1555]: Removed session 17. Jan 16 09:02:12.903000 systemd[1]: Started sshd@19-24.199.127.61:22-147.75.109.163:54742.service - OpenSSH per-connection server daemon (147.75.109.163:54742). 
Jan 16 09:02:12.979733 sshd[4332]: Accepted publickey for core from 147.75.109.163 port 54742 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:12.983042 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:12.991212 systemd-logind[1555]: New session 18 of user core. Jan 16 09:02:13.012077 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 16 09:02:13.207163 sshd[4335]: Connection closed by 147.75.109.163 port 54742 Jan 16 09:02:13.210110 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:13.225179 systemd[1]: sshd@19-24.199.127.61:22-147.75.109.163:54742.service: Deactivated successfully. Jan 16 09:02:13.231310 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 09:02:13.235512 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Jan 16 09:02:13.240063 systemd-logind[1555]: Removed session 18. Jan 16 09:02:18.228819 systemd[1]: Started sshd@20-24.199.127.61:22-147.75.109.163:38520.service - OpenSSH per-connection server daemon (147.75.109.163:38520). Jan 16 09:02:18.290609 sshd[4346]: Accepted publickey for core from 147.75.109.163 port 38520 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:18.294019 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:18.305259 systemd-logind[1555]: New session 19 of user core. Jan 16 09:02:18.309927 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 16 09:02:18.490087 sshd[4349]: Connection closed by 147.75.109.163 port 38520 Jan 16 09:02:18.491015 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:18.499952 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. Jan 16 09:02:18.501953 systemd[1]: sshd@20-24.199.127.61:22-147.75.109.163:38520.service: Deactivated successfully. Jan 16 09:02:18.506813 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 09:02:18.509120 systemd-logind[1555]: Removed session 19. Jan 16 09:02:23.513089 systemd[1]: Started sshd@21-24.199.127.61:22-147.75.109.163:38524.service - OpenSSH per-connection server daemon (147.75.109.163:38524). Jan 16 09:02:23.642624 sshd[4360]: Accepted publickey for core from 147.75.109.163 port 38524 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:23.645379 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:23.654474 systemd-logind[1555]: New session 20 of user core. Jan 16 09:02:23.662638 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 16 09:02:23.873438 sshd[4363]: Connection closed by 147.75.109.163 port 38524 Jan 16 09:02:23.874386 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:23.889849 systemd[1]: Started sshd@22-24.199.127.61:22-147.75.109.163:38534.service - OpenSSH per-connection server daemon (147.75.109.163:38534). Jan 16 09:02:23.890972 systemd[1]: sshd@21-24.199.127.61:22-147.75.109.163:38524.service: Deactivated successfully. Jan 16 09:02:23.900099 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 09:02:23.908766 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit. Jan 16 09:02:23.915803 systemd-logind[1555]: Removed session 20. 
Jan 16 09:02:23.973232 sshd[4372]: Accepted publickey for core from 147.75.109.163 port 38534 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:23.975622 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:23.985142 systemd-logind[1555]: New session 21 of user core. Jan 16 09:02:23.992269 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 16 09:02:24.521815 sshd[4377]: Connection closed by 147.75.109.163 port 38534 Jan 16 09:02:24.523150 sshd-session[4372]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:24.537139 systemd[1]: Started sshd@23-24.199.127.61:22-147.75.109.163:38544.service - OpenSSH per-connection server daemon (147.75.109.163:38544). Jan 16 09:02:24.540017 systemd[1]: sshd@22-24.199.127.61:22-147.75.109.163:38534.service: Deactivated successfully. Jan 16 09:02:24.553383 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 09:02:24.553711 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit. Jan 16 09:02:24.559145 systemd-logind[1555]: Removed session 21. Jan 16 09:02:24.672065 sshd[4383]: Accepted publickey for core from 147.75.109.163 port 38544 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:24.674945 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:24.682858 systemd-logind[1555]: New session 22 of user core. Jan 16 09:02:24.693114 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 16 09:02:27.063895 sshd[4389]: Connection closed by 147.75.109.163 port 38544 Jan 16 09:02:27.063810 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:27.091376 systemd[1]: Started sshd@24-24.199.127.61:22-147.75.109.163:38552.service - OpenSSH per-connection server daemon (147.75.109.163:38552). Jan 16 09:02:27.098169 systemd[1]: sshd@23-24.199.127.61:22-147.75.109.163:38544.service: Deactivated successfully. Jan 16 09:02:27.111241 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit. Jan 16 09:02:27.111901 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 09:02:27.123741 systemd-logind[1555]: Removed session 22. Jan 16 09:02:27.202495 sshd[4404]: Accepted publickey for core from 147.75.109.163 port 38552 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:27.205605 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:27.216932 systemd-logind[1555]: New session 23 of user core. Jan 16 09:02:27.223510 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 16 09:02:27.867108 sshd[4411]: Connection closed by 147.75.109.163 port 38552 Jan 16 09:02:27.871081 sshd-session[4404]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:27.880714 systemd[1]: sshd@24-24.199.127.61:22-147.75.109.163:38552.service: Deactivated successfully. Jan 16 09:02:27.894874 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 09:02:27.902765 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit. Jan 16 09:02:27.914704 systemd[1]: Started sshd@25-24.199.127.61:22-147.75.109.163:34856.service - OpenSSH per-connection server daemon (147.75.109.163:34856). Jan 16 09:02:27.917287 systemd-logind[1555]: Removed session 23. 
Jan 16 09:02:27.987710 sshd[4420]: Accepted publickey for core from 147.75.109.163 port 34856 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:27.989726 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:28.002957 systemd-logind[1555]: New session 24 of user core. Jan 16 09:02:28.009040 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 16 09:02:28.195823 sshd[4423]: Connection closed by 147.75.109.163 port 34856 Jan 16 09:02:28.198304 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:28.206641 systemd[1]: sshd@25-24.199.127.61:22-147.75.109.163:34856.service: Deactivated successfully. Jan 16 09:02:28.212753 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 09:02:28.215403 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit. Jan 16 09:02:28.219732 systemd-logind[1555]: Removed session 24. Jan 16 09:02:29.612708 kubelet[2806]: E0116 09:02:29.612328 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:33.209086 systemd[1]: Started sshd@26-24.199.127.61:22-147.75.109.163:34862.service - OpenSSH per-connection server daemon (147.75.109.163:34862). Jan 16 09:02:33.277293 sshd[4434]: Accepted publickey for core from 147.75.109.163 port 34862 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:33.279999 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:33.291541 systemd-logind[1555]: New session 25 of user core. Jan 16 09:02:33.297093 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 16 09:02:33.462540 sshd[4438]: Connection closed by 147.75.109.163 port 34862 Jan 16 09:02:33.462484 sshd-session[4434]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:33.469177 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit. Jan 16 09:02:33.470846 systemd[1]: sshd@26-24.199.127.61:22-147.75.109.163:34862.service: Deactivated successfully. Jan 16 09:02:33.479371 systemd[1]: session-25.scope: Deactivated successfully. Jan 16 09:02:33.480740 systemd-logind[1555]: Removed session 25. Jan 16 09:02:34.614500 kubelet[2806]: E0116 09:02:34.613788 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:38.476245 systemd[1]: Started sshd@27-24.199.127.61:22-147.75.109.163:47084.service - OpenSSH per-connection server daemon (147.75.109.163:47084). Jan 16 09:02:38.569463 sshd[4452]: Accepted publickey for core from 147.75.109.163 port 47084 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:38.571690 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:38.579529 systemd-logind[1555]: New session 26 of user core. Jan 16 09:02:38.586011 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 16 09:02:38.833719 sshd[4455]: Connection closed by 147.75.109.163 port 47084 Jan 16 09:02:38.835203 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:38.844162 systemd[1]: sshd@27-24.199.127.61:22-147.75.109.163:47084.service: Deactivated successfully. 
Jan 16 09:02:38.851987 systemd[1]: session-26.scope: Deactivated successfully. Jan 16 09:02:38.853488 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit. Jan 16 09:02:38.855810 systemd-logind[1555]: Removed session 26. Jan 16 09:02:39.613760 kubelet[2806]: E0116 09:02:39.613304 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:43.850395 systemd[1]: Started sshd@28-24.199.127.61:22-147.75.109.163:47090.service - OpenSSH per-connection server daemon (147.75.109.163:47090). Jan 16 09:02:43.939636 sshd[4469]: Accepted publickey for core from 147.75.109.163 port 47090 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:43.942197 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:43.951654 systemd-logind[1555]: New session 27 of user core. Jan 16 09:02:43.959035 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 16 09:02:44.144301 sshd[4473]: Connection closed by 147.75.109.163 port 47090 Jan 16 09:02:44.144098 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:44.152934 systemd[1]: sshd@28-24.199.127.61:22-147.75.109.163:47090.service: Deactivated successfully. Jan 16 09:02:44.158549 systemd[1]: session-27.scope: Deactivated successfully. Jan 16 09:02:44.160306 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. Jan 16 09:02:44.162884 systemd-logind[1555]: Removed session 27. Jan 16 09:02:49.158624 systemd[1]: Started sshd@29-24.199.127.61:22-147.75.109.163:37910.service - OpenSSH per-connection server daemon (147.75.109.163:37910). Jan 16 09:02:49.233102 sshd[4483]: Accepted publickey for core from 147.75.109.163 port 37910 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:49.235532 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:49.244742 systemd-logind[1555]: New session 28 of user core. Jan 16 09:02:49.255896 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 16 09:02:49.430153 sshd[4486]: Connection closed by 147.75.109.163 port 37910 Jan 16 09:02:49.431483 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:49.445469 systemd[1]: Started sshd@30-24.199.127.61:22-147.75.109.163:37914.service - OpenSSH per-connection server daemon (147.75.109.163:37914). Jan 16 09:02:49.449738 systemd[1]: sshd@29-24.199.127.61:22-147.75.109.163:37910.service: Deactivated successfully. Jan 16 09:02:49.453772 systemd[1]: session-28.scope: Deactivated successfully. Jan 16 09:02:49.460812 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit. Jan 16 09:02:49.463423 systemd-logind[1555]: Removed session 28. Jan 16 09:02:49.525608 sshd[4494]: Accepted publickey for core from 147.75.109.163 port 37914 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:49.529104 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:49.536673 systemd-logind[1555]: New session 29 of user core. Jan 16 09:02:49.545569 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 16 09:02:51.340225 systemd[1]: run-containerd-runc-k8s.io-a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6-runc.oZXqvL.mount: Deactivated successfully. 
Jan 16 09:02:51.367721 containerd[1577]: time="2025-01-16T09:02:51.367633182Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 09:02:51.435748 containerd[1577]: time="2025-01-16T09:02:51.435650720Z" level=info msg="StopContainer for \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\" with timeout 2 (s)" Jan 16 09:02:51.436157 containerd[1577]: time="2025-01-16T09:02:51.436114658Z" level=info msg="StopContainer for \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\" with timeout 30 (s)" Jan 16 09:02:51.439715 containerd[1577]: time="2025-01-16T09:02:51.438906451Z" level=info msg="Stop container \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\" with signal terminated" Jan 16 09:02:51.440910 containerd[1577]: time="2025-01-16T09:02:51.440865102Z" level=info msg="Stop container \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\" with signal terminated" Jan 16 09:02:51.458728 systemd-networkd[1220]: lxc_health: Link DOWN Jan 16 09:02:51.458740 systemd-networkd[1220]: lxc_health: Lost carrier Jan 16 09:02:51.539279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6-rootfs.mount: Deactivated successfully. Jan 16 09:02:51.550319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8-rootfs.mount: Deactivated successfully. Jan 16 09:02:51.559218 containerd[1577]: time="2025-01-16T09:02:51.558893417Z" level=info msg="shim disconnected" id=a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6 namespace=k8s.io Jan 16 09:02:51.559218 containerd[1577]: time="2025-01-16T09:02:51.558972845Z" level=warning msg="cleaning up after shim disconnected" id=a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6 namespace=k8s.io Jan 16 09:02:51.559218 containerd[1577]: time="2025-01-16T09:02:51.558986577Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:51.561077 containerd[1577]: time="2025-01-16T09:02:51.560991810Z" level=info msg="shim disconnected" id=0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8 namespace=k8s.io Jan 16 09:02:51.561077 containerd[1577]: time="2025-01-16T09:02:51.561070424Z" level=warning msg="cleaning up after shim disconnected" id=0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8 namespace=k8s.io Jan 16 09:02:51.561077 containerd[1577]: time="2025-01-16T09:02:51.561085711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:51.591573 containerd[1577]: time="2025-01-16T09:02:51.591365938Z" level=info msg="StopContainer for \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\" returns successfully" Jan 16 09:02:51.597364 containerd[1577]: time="2025-01-16T09:02:51.596761836Z" level=info msg="StopPodSandbox for \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\"" Jan 16 09:02:51.605638 containerd[1577]: time="2025-01-16T09:02:51.603636371Z" level=info msg="Container to stop \"870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.605638 containerd[1577]: time="2025-01-16T09:02:51.603759113Z" level=info msg="Container to stop 
\"3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.605638 containerd[1577]: time="2025-01-16T09:02:51.603774564Z" level=info msg="Container to stop \"df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.605638 containerd[1577]: time="2025-01-16T09:02:51.603796391Z" level=info msg="Container to stop \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.605638 containerd[1577]: time="2025-01-16T09:02:51.603811070Z" level=info msg="Container to stop \"02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.609500 containerd[1577]: time="2025-01-16T09:02:51.607766146Z" level=info msg="StopContainer for \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\" returns successfully" Jan 16 09:02:51.610040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95-shm.mount: Deactivated successfully. Jan 16 09:02:51.612601 containerd[1577]: time="2025-01-16T09:02:51.612390384Z" level=info msg="StopPodSandbox for \"6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099\"" Jan 16 09:02:51.612897 containerd[1577]: time="2025-01-16T09:02:51.612834358Z" level=info msg="Container to stop \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.685394 containerd[1577]: time="2025-01-16T09:02:51.685315599Z" level=info msg="shim disconnected" id=05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95 namespace=k8s.io Jan 16 09:02:51.685749 containerd[1577]: time="2025-01-16T09:02:51.685716354Z" level=warning msg="cleaning up after shim disconnected" id=05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95 namespace=k8s.io Jan 16 09:02:51.686067 containerd[1577]: time="2025-01-16T09:02:51.686039070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:51.713118 containerd[1577]: time="2025-01-16T09:02:51.713029753Z" level=info msg="shim disconnected" id=6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099 namespace=k8s.io Jan 16 09:02:51.713118 containerd[1577]: time="2025-01-16T09:02:51.713112725Z" level=warning msg="cleaning up after shim disconnected" id=6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099 namespace=k8s.io Jan 16 09:02:51.713118 containerd[1577]: time="2025-01-16T09:02:51.713128160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:51.744696 containerd[1577]: time="2025-01-16T09:02:51.744538623Z" level=info msg="TearDown network for sandbox \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" successfully" Jan 16 09:02:51.744696 containerd[1577]: time="2025-01-16T09:02:51.744621633Z" level=info msg="StopPodSandbox for \"05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95\" returns successfully" Jan 16 09:02:51.773631 containerd[1577]: time="2025-01-16T09:02:51.773535586Z" level=info msg="TearDown network for sandbox \"6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099\" successfully" Jan 16 09:02:51.773631 containerd[1577]: time="2025-01-16T09:02:51.773597766Z" level=info 
msg="StopPodSandbox for \"6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099\" returns successfully" Jan 16 09:02:51.954957 kubelet[2806]: I0116 09:02:51.954272 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-clustermesh-secrets\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.954957 kubelet[2806]: I0116 09:02:51.954361 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-etc-cni-netd\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.954957 kubelet[2806]: I0116 09:02:51.954392 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-lib-modules\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.954957 kubelet[2806]: I0116 09:02:51.954449 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-run\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.954957 kubelet[2806]: I0116 09:02:51.954497 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mtcr\" (UniqueName: \"kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-kube-api-access-2mtcr\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.954957 kubelet[2806]: I0116 09:02:51.954531 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-cilium-config-path\") pod \"f4d2f7b4-41c6-49c3-8f9b-61f02e17884d\" (UID: \"f4d2f7b4-41c6-49c3-8f9b-61f02e17884d\") " Jan 16 09:02:51.955890 kubelet[2806]: I0116 09:02:51.954566 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-config-path\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.955890 kubelet[2806]: I0116 09:02:51.954594 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-cgroup\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.955890 kubelet[2806]: I0116 09:02:51.954625 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-net\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.955890 kubelet[2806]: I0116 09:02:51.954667 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp52z\" (UniqueName: 
\"kubernetes.io/projected/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-kube-api-access-fp52z\") pod \"f4d2f7b4-41c6-49c3-8f9b-61f02e17884d\" (UID: \"f4d2f7b4-41c6-49c3-8f9b-61f02e17884d\") " Jan 16 09:02:51.955890 kubelet[2806]: I0116 09:02:51.954696 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-bpf-maps\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.955890 kubelet[2806]: I0116 09:02:51.954725 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hostproc\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.957217 kubelet[2806]: I0116 09:02:51.954756 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-xtables-lock\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.957217 kubelet[2806]: I0116 09:02:51.954782 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cni-path\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.957217 kubelet[2806]: I0116 09:02:51.954813 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-kernel\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.957217 kubelet[2806]: I0116 09:02:51.954845 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hubble-tls\") pod \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\" (UID: \"5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a\") " Jan 16 09:02:51.959455 kubelet[2806]: I0116 09:02:51.959222 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.960116 kubelet[2806]: I0116 09:02:51.959490 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.960116 kubelet[2806]: I0116 09:02:51.959527 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.961777 kubelet[2806]: I0116 09:02:51.961549 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.961777 kubelet[2806]: I0116 09:02:51.961714 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.964351 kubelet[2806]: I0116 09:02:51.964147 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.964351 kubelet[2806]: I0116 09:02:51.964229 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hostproc" (OuterVolumeSpecName: "hostproc") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.964351 kubelet[2806]: I0116 09:02:51.964254 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.964351 kubelet[2806]: I0116 09:02:51.964276 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cni-path" (OuterVolumeSpecName: "cni-path") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.964351 kubelet[2806]: I0116 09:02:51.964298 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.967830 kubelet[2806]: I0116 09:02:51.967759 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 16 09:02:51.968085 kubelet[2806]: I0116 09:02:51.968069 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:02:51.969403 kubelet[2806]: I0116 09:02:51.969360 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-kube-api-access-2mtcr" (OuterVolumeSpecName: "kube-api-access-2mtcr") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "kube-api-access-2mtcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:02:51.970321 kubelet[2806]: I0116 09:02:51.970277 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" (UID: "5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 16 09:02:51.970469 kubelet[2806]: I0116 09:02:51.970439 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4d2f7b4-41c6-49c3-8f9b-61f02e17884d" (UID: "f4d2f7b4-41c6-49c3-8f9b-61f02e17884d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 16 09:02:51.971184 kubelet[2806]: I0116 09:02:51.971131 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-kube-api-access-fp52z" (OuterVolumeSpecName: "kube-api-access-fp52z") pod "f4d2f7b4-41c6-49c3-8f9b-61f02e17884d" (UID: "f4d2f7b4-41c6-49c3-8f9b-61f02e17884d"). InnerVolumeSpecName "kube-api-access-fp52z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.055929 2806 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-net\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.055997 2806 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fp52z\" (UniqueName: \"kubernetes.io/projected/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-kube-api-access-fp52z\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.056023 2806 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-bpf-maps\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.056045 2806 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hostproc\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.056065 2806 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-xtables-lock\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.056084 2806 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cni-path\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.056100 2806 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-host-proc-sys-kernel\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.056555 kubelet[2806]: I0116 09:02:52.056118 2806 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-hubble-tls\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056170 2806 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-clustermesh-secrets\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056290 2806 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-etc-cni-netd\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056309 2806 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-lib-modules\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056325 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-run\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056342 2806 reconciler_common.go:300] 
"Volume detached for volume \"kube-api-access-2mtcr\" (UniqueName: \"kubernetes.io/projected/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-kube-api-access-2mtcr\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056359 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d-cilium-config-path\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056490 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-config-path\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.057122 kubelet[2806]: I0116 09:02:52.056516 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a-cilium-cgroup\") on node \"ci-4152.2.0-e-393f89f1d0\" DevicePath \"\"" Jan 16 09:02:52.326844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099-rootfs.mount: Deactivated successfully. Jan 16 09:02:52.327039 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6aad7d353e6a8d99d0aaf25f3ada9a713efe883297771370ccd99c7faae62099-shm.mount: Deactivated successfully. Jan 16 09:02:52.327144 systemd[1]: var-lib-kubelet-pods-f4d2f7b4\x2d41c6\x2d49c3\x2d8f9b\x2d61f02e17884d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfp52z.mount: Deactivated successfully. Jan 16 09:02:52.327250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05d1db85a00fa8fbebc3691890726280d5545a049f5b39c9861286d69af67b95-rootfs.mount: Deactivated successfully. Jan 16 09:02:52.327355 systemd[1]: var-lib-kubelet-pods-5c9e475e\x2d4eb6\x2d48e1\x2d96fe\x2d8f8b0d2bdd1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mtcr.mount: Deactivated successfully. Jan 16 09:02:52.327509 systemd[1]: var-lib-kubelet-pods-5c9e475e\x2d4eb6\x2d48e1\x2d96fe\x2d8f8b0d2bdd1a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 16 09:02:52.327660 systemd[1]: var-lib-kubelet-pods-5c9e475e\x2d4eb6\x2d48e1\x2d96fe\x2d8f8b0d2bdd1a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 16 09:02:52.750467 kubelet[2806]: I0116 09:02:52.750354 2806 scope.go:117] "RemoveContainer" containerID="a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6" Jan 16 09:02:52.773677 containerd[1577]: time="2025-01-16T09:02:52.773572105Z" level=info msg="RemoveContainer for \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\"" Jan 16 09:02:52.781698 kubelet[2806]: E0116 09:02:52.781618 2806 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 16 09:02:52.782877 containerd[1577]: time="2025-01-16T09:02:52.782532940Z" level=info msg="RemoveContainer for \"a3336472ed5a7a2c1e735c99999b1d2d0170f6cf459a425e3aaabdcadc26eba6\" returns successfully" Jan 16 09:02:52.788070 kubelet[2806]: I0116 09:02:52.787534 2806 scope.go:117] "RemoveContainer" containerID="3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9" Jan 16 09:02:52.791218 containerd[1577]: time="2025-01-16T09:02:52.791012848Z" level=info msg="RemoveContainer for \"3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9\"" Jan 16 09:02:52.794948 containerd[1577]: time="2025-01-16T09:02:52.794770112Z" level=info msg="RemoveContainer for \"3fb61f8bc053f88df65e7ff9b3da14dbc43a0c6850c7c37e6e922984cc7135b9\" returns successfully" Jan 16 09:02:52.796995 kubelet[2806]: I0116 09:02:52.795153 2806 scope.go:117] "RemoveContainer" containerID="df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592" Jan 16 09:02:52.805211 containerd[1577]: time="2025-01-16T09:02:52.804898120Z" level=info msg="RemoveContainer for \"df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592\"" Jan 16 09:02:52.810927 containerd[1577]: time="2025-01-16T09:02:52.810849322Z" level=info msg="RemoveContainer for \"df84919d60d3717f5721b3391e0df7cb872eb8540e7410d41525e08e62db1592\" returns successfully" Jan 16 09:02:52.812665 kubelet[2806]: I0116 09:02:52.812196 2806 scope.go:117] "RemoveContainer" containerID="02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9" Jan 16 09:02:52.815878 containerd[1577]: time="2025-01-16T09:02:52.815406111Z" level=info msg="RemoveContainer for \"02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9\"" Jan 16 09:02:52.824706 containerd[1577]: time="2025-01-16T09:02:52.824559812Z" level=info msg="RemoveContainer for \"02be9972315e9d054dace87e7854e6c7309a78b4103a1764d273d296f2e848b9\" returns successfully" Jan 16 09:02:52.825327 kubelet[2806]: I0116 09:02:52.825282 2806 scope.go:117] "RemoveContainer" containerID="870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e" Jan 16 09:02:52.828587 containerd[1577]: time="2025-01-16T09:02:52.827992373Z" level=info msg="RemoveContainer for \"870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e\"" Jan 16 09:02:52.831928 containerd[1577]: time="2025-01-16T09:02:52.831869852Z" level=info msg="RemoveContainer for \"870342d0db70b4465d865bcef66994bb2b1174012fb751e5d9d67498688f860e\" returns successfully" Jan 16 09:02:52.832594 kubelet[2806]: I0116 09:02:52.832554 2806 scope.go:117] "RemoveContainer" containerID="0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8" Jan 16 09:02:52.834898 containerd[1577]: time="2025-01-16T09:02:52.834853785Z" level=info msg="RemoveContainer for \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\"" Jan 16 09:02:52.839010 containerd[1577]: time="2025-01-16T09:02:52.838917876Z" level=info msg="RemoveContainer 
for \"0c98e4bcf30fa58fa06f407242586a332ec8df15c71624ba97609f0486d3f2c8\" returns successfully" Jan 16 09:02:53.213268 sshd[4500]: Connection closed by 147.75.109.163 port 37914 Jan 16 09:02:53.214084 sshd-session[4494]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:53.223680 systemd[1]: Started sshd@31-24.199.127.61:22-147.75.109.163:37922.service - OpenSSH per-connection server daemon (147.75.109.163:37922). Jan 16 09:02:53.225177 systemd[1]: sshd@30-24.199.127.61:22-147.75.109.163:37914.service: Deactivated successfully. Jan 16 09:02:53.236904 systemd-logind[1555]: Session 29 logged out. Waiting for processes to exit. Jan 16 09:02:53.240842 systemd[1]: session-29.scope: Deactivated successfully. Jan 16 09:02:53.244892 systemd-logind[1555]: Removed session 29. Jan 16 09:02:53.300853 sshd[4663]: Accepted publickey for core from 147.75.109.163 port 37922 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:02:53.303197 sshd-session[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:02:53.313015 systemd-logind[1555]: New session 30 of user core. Jan 16 09:02:53.319009 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 16 09:02:54.291503 sshd[4669]: Connection closed by 147.75.109.163 port 37922 Jan 16 09:02:54.295719 sshd-session[4663]: pam_unix(sshd:session): session closed for user core Jan 16 09:02:54.315308 systemd[1]: Started sshd@32-24.199.127.61:22-147.75.109.163:37930.service - OpenSSH per-connection server daemon (147.75.109.163:37930). Jan 16 09:02:54.319142 systemd[1]: sshd@31-24.199.127.61:22-147.75.109.163:37922.service: Deactivated successfully. Jan 16 09:02:54.333107 kubelet[2806]: I0116 09:02:54.333044 2806 topology_manager.go:215] "Topology Admit Handler" podUID="c50f88a9-2b6d-40ec-924a-9656e5e17927" podNamespace="kube-system" podName="cilium-gsgnl" Jan 16 09:02:54.342918 kubelet[2806]: E0116 09:02:54.341096 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" containerName="mount-bpf-fs" Jan 16 09:02:54.342918 kubelet[2806]: E0116 09:02:54.341142 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" containerName="clean-cilium-state" Jan 16 09:02:54.342918 kubelet[2806]: E0116 09:02:54.341155 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" containerName="mount-cgroup" Jan 16 09:02:54.342918 kubelet[2806]: E0116 09:02:54.341169 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" containerName="apply-sysctl-overwrites" Jan 16 09:02:54.342918 kubelet[2806]: E0116 09:02:54.341180 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" containerName="cilium-agent" Jan 16 09:02:54.342918 kubelet[2806]: E0116 09:02:54.341191 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4d2f7b4-41c6-49c3-8f9b-61f02e17884d" containerName="cilium-operator" Jan 16 09:02:54.342918 kubelet[2806]: I0116 09:02:54.341231 2806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" containerName="cilium-agent" Jan 16 09:02:54.342918 kubelet[2806]: I0116 09:02:54.341243 2806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d2f7b4-41c6-49c3-8f9b-61f02e17884d" containerName="cilium-operator" Jan 16 09:02:54.343086 systemd[1]: 
session-30.scope: Deactivated successfully.
Jan 16 09:02:54.350404 systemd-logind[1555]: Session 30 logged out. Waiting for processes to exit.
Jan 16 09:02:54.360515 systemd-logind[1555]: Removed session 30.
Jan 16 09:02:54.448462 sshd[4675]: Accepted publickey for core from 147.75.109.163 port 37930 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms
Jan 16 09:02:54.451486 sshd-session[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:54.473765 systemd-logind[1555]: New session 31 of user core.
Jan 16 09:02:54.486291 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 16 09:02:54.496990 kubelet[2806]: I0116 09:02:54.494605 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdxx6\" (UniqueName: \"kubernetes.io/projected/c50f88a9-2b6d-40ec-924a-9656e5e17927-kube-api-access-rdxx6\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.496990 kubelet[2806]: I0116 09:02:54.494683 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c50f88a9-2b6d-40ec-924a-9656e5e17927-cilium-ipsec-secrets\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.496990 kubelet[2806]: I0116 09:02:54.494739 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-host-proc-sys-kernel\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.496990 kubelet[2806]: I0116 09:02:54.496597 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-bpf-maps\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.500828 kubelet[2806]: I0116 09:02:54.497684 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-cilium-cgroup\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.504684 kubelet[2806]: I0116 09:02:54.502663 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-xtables-lock\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.504684 kubelet[2806]: I0116 09:02:54.502847 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-host-proc-sys-net\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.504684 kubelet[2806]: I0116 09:02:54.502935 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-etc-cni-netd\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.504684 kubelet[2806]: I0116 09:02:54.502994 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-cilium-run\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.504684 kubelet[2806]: I0116 09:02:54.503040 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-hostproc\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.504684 kubelet[2806]: I0116 09:02:54.503073 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c50f88a9-2b6d-40ec-924a-9656e5e17927-hubble-tls\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.505121 kubelet[2806]: I0116 09:02:54.503215 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-cni-path\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.505121 kubelet[2806]: I0116 09:02:54.503282 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c50f88a9-2b6d-40ec-924a-9656e5e17927-lib-modules\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.505121 kubelet[2806]: I0116 09:02:54.503383 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c50f88a9-2b6d-40ec-924a-9656e5e17927-clustermesh-secrets\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.505121 kubelet[2806]: I0116 09:02:54.503492 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c50f88a9-2b6d-40ec-924a-9656e5e17927-cilium-config-path\") pod \"cilium-gsgnl\" (UID: \"c50f88a9-2b6d-40ec-924a-9656e5e17927\") " pod="kube-system/cilium-gsgnl"
Jan 16 09:02:54.566649 sshd[4681]: Connection closed by 147.75.109.163 port 37930
Jan 16 09:02:54.569367 sshd-session[4675]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:54.585073 systemd[1]: Started sshd@33-24.199.127.61:22-147.75.109.163:37932.service - OpenSSH per-connection server daemon (147.75.109.163:37932).
Jan 16 09:02:54.589308 systemd[1]: sshd@32-24.199.127.61:22-147.75.109.163:37930.service: Deactivated successfully.
Jan 16 09:02:54.604509 systemd[1]: session-31.scope: Deactivated successfully.
Jan 16 09:02:54.610177 systemd-logind[1555]: Session 31 logged out. Waiting for processes to exit.
Jan 16 09:02:54.618641 systemd-logind[1555]: Removed session 31.
Jan 16 09:02:54.625570 kubelet[2806]: I0116 09:02:54.625030 2806 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a" path="/var/lib/kubelet/pods/5c9e475e-4eb6-48e1-96fe-8f8b0d2bdd1a/volumes"
Jan 16 09:02:54.658690 kubelet[2806]: I0116 09:02:54.658643 2806 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f4d2f7b4-41c6-49c3-8f9b-61f02e17884d" path="/var/lib/kubelet/pods/f4d2f7b4-41c6-49c3-8f9b-61f02e17884d/volumes"
Jan 16 09:02:54.712609 sshd[4685]: Accepted publickey for core from 147.75.109.163 port 37932 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms
Jan 16 09:02:54.715406 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:54.724869 systemd-logind[1555]: New session 32 of user core.
Jan 16 09:02:54.730068 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 16 09:02:54.746230 kubelet[2806]: E0116 09:02:54.744884 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:02:54.746518 containerd[1577]: time="2025-01-16T09:02:54.745694590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsgnl,Uid:c50f88a9-2b6d-40ec-924a-9656e5e17927,Namespace:kube-system,Attempt:0,}"
Jan 16 09:02:54.804019 containerd[1577]: time="2025-01-16T09:02:54.803871968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:02:54.804488 containerd[1577]: time="2025-01-16T09:02:54.804243882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:02:54.806459 containerd[1577]: time="2025-01-16T09:02:54.806049174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:02:54.806615 containerd[1577]: time="2025-01-16T09:02:54.806269211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:02:54.900677 containerd[1577]: time="2025-01-16T09:02:54.900618238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsgnl,Uid:c50f88a9-2b6d-40ec-924a-9656e5e17927,Namespace:kube-system,Attempt:0,} returns sandbox id \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\""
Jan 16 09:02:54.905119 kubelet[2806]: E0116 09:02:54.902881 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:02:54.934799 containerd[1577]: time="2025-01-16T09:02:54.930723366Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 16 09:02:54.954709 containerd[1577]: time="2025-01-16T09:02:54.954650065Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2df861ae7361492ae96aeb04a088fc05b47abb92717d90f095f805d5260c9e95\""
Jan 16 09:02:54.958803 containerd[1577]: time="2025-01-16T09:02:54.956822408Z" level=info msg="StartContainer for \"2df861ae7361492ae96aeb04a088fc05b47abb92717d90f095f805d5260c9e95\""
Jan 16 09:02:55.061747 containerd[1577]: time="2025-01-16T09:02:55.061672310Z" level=info msg="StartContainer for \"2df861ae7361492ae96aeb04a088fc05b47abb92717d90f095f805d5260c9e95\" returns successfully"
Jan 16 09:02:55.134228 containerd[1577]: time="2025-01-16T09:02:55.133915876Z" level=info msg="shim disconnected" id=2df861ae7361492ae96aeb04a088fc05b47abb92717d90f095f805d5260c9e95 namespace=k8s.io
Jan 16 09:02:55.134228 containerd[1577]: time="2025-01-16T09:02:55.133990913Z" level=warning msg="cleaning up after shim disconnected" id=2df861ae7361492ae96aeb04a088fc05b47abb92717d90f095f805d5260c9e95 namespace=k8s.io
Jan 16 09:02:55.134228 containerd[1577]: time="2025-01-16T09:02:55.134003695Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:55.157174 containerd[1577]: time="2025-01-16T09:02:55.156972652Z" level=warning msg="cleanup warnings time=\"2025-01-16T09:02:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 16 09:02:55.793032 kubelet[2806]: E0116 09:02:55.792959 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:02:55.805084 containerd[1577]: time="2025-01-16T09:02:55.804867508Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 16 09:02:55.827560 containerd[1577]: time="2025-01-16T09:02:55.826357562Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00024f64b7eee63e64371f6f29ee191026be253b18a49a52d3f499aaf0ca747d\""
Jan 16 09:02:55.833106 containerd[1577]: time="2025-01-16T09:02:55.830658962Z" level=info msg="StartContainer for \"00024f64b7eee63e64371f6f29ee191026be253b18a49a52d3f499aaf0ca747d\""
Jan 16 09:02:55.935077 containerd[1577]: time="2025-01-16T09:02:55.934805416Z" level=info msg="StartContainer for \"00024f64b7eee63e64371f6f29ee191026be253b18a49a52d3f499aaf0ca747d\" returns successfully"
Jan 16 09:02:55.985182 containerd[1577]: time="2025-01-16T09:02:55.985084058Z" level=info msg="shim disconnected" id=00024f64b7eee63e64371f6f29ee191026be253b18a49a52d3f499aaf0ca747d namespace=k8s.io
Jan 16 09:02:55.985651 containerd[1577]: time="2025-01-16T09:02:55.985195015Z" level=warning msg="cleaning up after shim disconnected" id=00024f64b7eee63e64371f6f29ee191026be253b18a49a52d3f499aaf0ca747d namespace=k8s.io
Jan 16 09:02:55.985651 containerd[1577]: time="2025-01-16T09:02:55.985214379Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:56.199810 kubelet[2806]: I0116 09:02:56.199745 2806 setters.go:568] "Node became not ready" node="ci-4152.2.0-e-393f89f1d0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-16T09:02:56Z","lastTransitionTime":"2025-01-16T09:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 16 09:02:56.612952 kubelet[2806]: E0116 09:02:56.612268 2806 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fpznq" podUID="a57877a6-6eef-4cee-9dea-5c89fdfe526d"
Jan 16 09:02:56.633722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00024f64b7eee63e64371f6f29ee191026be253b18a49a52d3f499aaf0ca747d-rootfs.mount: Deactivated successfully.
Jan 16 09:02:56.802672 kubelet[2806]: E0116 09:02:56.802346 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:02:56.810680 containerd[1577]: time="2025-01-16T09:02:56.810634609Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 16 09:02:56.845678 containerd[1577]: time="2025-01-16T09:02:56.845599121Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad96f0546142f0791c21e9fd45406441bedb27c8d09bf61633948fd4e880f929\""
Jan 16 09:02:56.846697 containerd[1577]: time="2025-01-16T09:02:56.846613876Z" level=info msg="StartContainer for \"ad96f0546142f0791c21e9fd45406441bedb27c8d09bf61633948fd4e880f929\""
Jan 16 09:02:56.962821 containerd[1577]: time="2025-01-16T09:02:56.962092717Z" level=info msg="StartContainer for \"ad96f0546142f0791c21e9fd45406441bedb27c8d09bf61633948fd4e880f929\" returns successfully"
Jan 16 09:02:57.032695 containerd[1577]: time="2025-01-16T09:02:57.032615459Z" level=info msg="shim disconnected" id=ad96f0546142f0791c21e9fd45406441bedb27c8d09bf61633948fd4e880f929 namespace=k8s.io
Jan 16 09:02:57.033176 containerd[1577]: time="2025-01-16T09:02:57.033001573Z" level=warning msg="cleaning up after shim disconnected" id=ad96f0546142f0791c21e9fd45406441bedb27c8d09bf61633948fd4e880f929 namespace=k8s.io
Jan 16 09:02:57.033176 containerd[1577]: time="2025-01-16T09:02:57.033034405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:57.636847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad96f0546142f0791c21e9fd45406441bedb27c8d09bf61633948fd4e880f929-rootfs.mount: Deactivated successfully.
Jan 16 09:02:57.783701 kubelet[2806]: E0116 09:02:57.783608 2806 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 16 09:02:57.811250 kubelet[2806]: E0116 09:02:57.811186 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:02:57.819238 containerd[1577]: time="2025-01-16T09:02:57.819175926Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 16 09:02:57.848606 containerd[1577]: time="2025-01-16T09:02:57.847966934Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e48ed5cf81a40420b6468cb02cb93a3fbbec1ace383e5c088261d6a470173009\""
Jan 16 09:02:57.850580 containerd[1577]: time="2025-01-16T09:02:57.849262026Z" level=info msg="StartContainer for \"e48ed5cf81a40420b6468cb02cb93a3fbbec1ace383e5c088261d6a470173009\""
Jan 16 09:02:57.952195 containerd[1577]: time="2025-01-16T09:02:57.952054937Z" level=info msg="StartContainer for \"e48ed5cf81a40420b6468cb02cb93a3fbbec1ace383e5c088261d6a470173009\" returns successfully"
Jan 16 09:02:57.987850 containerd[1577]: time="2025-01-16T09:02:57.987743382Z" level=info msg="shim disconnected" id=e48ed5cf81a40420b6468cb02cb93a3fbbec1ace383e5c088261d6a470173009 namespace=k8s.io
Jan 16 09:02:57.988147 containerd[1577]: time="2025-01-16T09:02:57.987939007Z" level=warning msg="cleaning up after shim disconnected" id=e48ed5cf81a40420b6468cb02cb93a3fbbec1ace383e5c088261d6a470173009 namespace=k8s.io
Jan 16 09:02:57.988147 containerd[1577]: time="2025-01-16T09:02:57.987967592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:58.612889 kubelet[2806]: E0116 09:02:58.612831 2806 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fpznq" podUID="a57877a6-6eef-4cee-9dea-5c89fdfe526d"
Jan 16 09:02:58.634513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e48ed5cf81a40420b6468cb02cb93a3fbbec1ace383e5c088261d6a470173009-rootfs.mount: Deactivated successfully.
Jan 16 09:02:58.821491 kubelet[2806]: E0116 09:02:58.820769 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:02:58.830490 containerd[1577]: time="2025-01-16T09:02:58.829777217Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 16 09:02:58.853456 containerd[1577]: time="2025-01-16T09:02:58.851896346Z" level=info msg="CreateContainer within sandbox \"33ef49cb27d5346131c620962aa8e857db7ee0097361a8335d99ab75953401ff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cec53d7c2893b93773391a30806e601ecaeac5287c09fd396b3d2bfde7aedfda\""
Jan 16 09:02:58.856828 containerd[1577]: time="2025-01-16T09:02:58.856566408Z" level=info msg="StartContainer for \"cec53d7c2893b93773391a30806e601ecaeac5287c09fd396b3d2bfde7aedfda\""
Jan 16 09:02:59.044921 containerd[1577]: time="2025-01-16T09:02:59.044791779Z" level=info msg="StartContainer for \"cec53d7c2893b93773391a30806e601ecaeac5287c09fd396b3d2bfde7aedfda\" returns successfully"
Jan 16 09:02:59.612915 kubelet[2806]: E0116 09:02:59.612835 2806 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-5mtsc" podUID="9b87d7e0-a324-4b0f-a3c4-5c209da6016d"
Jan 16 09:02:59.661668 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 16 09:02:59.843532 kubelet[2806]: E0116 09:02:59.843004 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:00.611881 kubelet[2806]: E0116 09:03:00.611777 2806 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fpznq" podUID="a57877a6-6eef-4cee-9dea-5c89fdfe526d"
Jan 16 09:03:00.846764 kubelet[2806]: E0116 09:03:00.846550 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:01.570468 systemd[1]: run-containerd-runc-k8s.io-cec53d7c2893b93773391a30806e601ecaeac5287c09fd396b3d2bfde7aedfda-runc.Xom9UD.mount: Deactivated successfully.
Jan 16 09:03:01.619098 kubelet[2806]: E0116 09:03:01.616408 2806 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-5mtsc" podUID="9b87d7e0-a324-4b0f-a3c4-5c209da6016d"
Jan 16 09:03:02.616009 kubelet[2806]: E0116 09:03:02.611851 2806 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fpznq" podUID="a57877a6-6eef-4cee-9dea-5c89fdfe526d"
Jan 16 09:03:02.624884 kubelet[2806]: E0116 09:03:02.624785 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:03.614542 kubelet[2806]: E0116 09:03:03.614485 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:04.617792 kubelet[2806]: E0116 09:03:04.615519 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:04.980293 systemd-networkd[1220]: lxc_health: Link UP
Jan 16 09:03:04.987721 systemd-networkd[1220]: lxc_health: Gained carrier
Jan 16 09:03:06.485705 systemd-networkd[1220]: lxc_health: Gained IPv6LL
Jan 16 09:03:06.747906 kubelet[2806]: E0116 09:03:06.747519 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:06.788304 kubelet[2806]: I0116 09:03:06.787512 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gsgnl" podStartSLOduration=12.787326213 podStartE2EDuration="12.787326213s" podCreationTimestamp="2025-01-16 09:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:02:59.886584216 +0000 UTC m=+257.467338611" watchObservedRunningTime="2025-01-16 09:03:06.787326213 +0000 UTC m=+264.368080606"
Jan 16 09:03:06.870218 kubelet[2806]: E0116 09:03:06.869937 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:07.873934 kubelet[2806]: E0116 09:03:07.873870 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 16 09:03:11.233495 sshd[4696]: Connection closed by 147.75.109.163 port 37932
Jan 16 09:03:11.234975 sshd-session[4685]: pam_unix(sshd:session): session closed for user core
Jan 16 09:03:11.244854 systemd-logind[1555]: Session 32 logged out. Waiting for processes to exit.
Jan 16 09:03:11.251037 systemd[1]: sshd@33-24.199.127.61:22-147.75.109.163:37932.service: Deactivated successfully.
Jan 16 09:03:11.263969 systemd[1]: session-32.scope: Deactivated successfully.
Jan 16 09:03:11.266738 systemd-logind[1555]: Removed session 32.