Mar 17 17:55:33.207996 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:55:33.208042 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:55:33.208058 kernel: BIOS-provided physical RAM map:
Mar 17 17:55:33.208065 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:55:33.208072 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:55:33.208079 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:55:33.208087 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Mar 17 17:55:33.208093 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Mar 17 17:55:33.208102 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:55:33.208113 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:55:33.208128 kernel: NX (Execute Disable) protection: active
Mar 17 17:55:33.208144 kernel: APIC: Static calls initialized
Mar 17 17:55:33.208155 kernel: SMBIOS 2.8 present.
Mar 17 17:55:33.208165 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Mar 17 17:55:33.208178 kernel: Hypervisor detected: KVM
Mar 17 17:55:33.208189 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:55:33.208210 kernel: kvm-clock: using sched offset of 4121794596 cycles
Mar 17 17:55:33.208222 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:55:33.208233 kernel: tsc: Detected 1999.999 MHz processor
Mar 17 17:55:33.208244 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:55:33.208256 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:55:33.208267 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Mar 17 17:55:33.208278 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:55:33.208289 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:55:33.208304 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:55:33.208315 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Mar 17 17:55:33.208326 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:33.208339 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:33.208351 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:33.208364 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 17 17:55:33.208376 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:33.208386 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:33.208394 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:33.208405 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:33.208413 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Mar 17 17:55:33.208420 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Mar 17 17:55:33.208427 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 17 17:55:33.208435 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Mar 17 17:55:33.208442 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Mar 17 17:55:33.208450 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Mar 17 17:55:33.208461 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Mar 17 17:55:33.208471 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:55:33.208484 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:55:33.208493 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 17:55:33.208501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 17:55:33.208508 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Mar 17 17:55:33.208516 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Mar 17 17:55:33.208527 kernel: Zone ranges:
Mar 17 17:55:33.208535 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:55:33.208543 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Mar 17 17:55:33.208551 kernel: Normal empty
Mar 17 17:55:33.208609 kernel: Movable zone start for each node
Mar 17 17:55:33.208618 kernel: Early memory node ranges
Mar 17 17:55:33.208627 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:55:33.208635 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Mar 17 17:55:33.208642 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Mar 17 17:55:33.208654 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:55:33.208662 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:55:33.208674 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Mar 17 17:55:33.208682 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:55:33.208690 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:55:33.208698 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:55:33.208706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:55:33.208714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:55:33.208721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:55:33.208729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:55:33.208740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:55:33.208748 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:55:33.208755 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:55:33.208763 kernel: TSC deadline timer available
Mar 17 17:55:33.208771 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:55:33.208779 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:55:33.208787 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Mar 17 17:55:33.208799 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:55:33.208807 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:55:33.208818 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:55:33.208827 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:55:33.208840 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:55:33.208854 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:55:33.208866 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 17:55:33.208880 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:55:33.208892 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:55:33.208905 kernel: random: crng init done
Mar 17 17:55:33.208921 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:55:33.208929 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:55:33.208937 kernel: Fallback order for Node 0: 0
Mar 17 17:55:33.208945 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Mar 17 17:55:33.208952 kernel: Policy zone: DMA32
Mar 17 17:55:33.208961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:55:33.208969 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 127196K reserved, 0K cma-reserved)
Mar 17 17:55:33.208977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:55:33.208985 kernel: Kernel/User page tables isolation: enabled
Mar 17 17:55:33.208997 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:55:33.209011 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:55:33.209022 kernel: Dynamic Preempt: voluntary
Mar 17 17:55:33.209033 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:55:33.209055 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:55:33.209065 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:55:33.209074 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:55:33.209082 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:55:33.209094 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:55:33.209110 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:55:33.209122 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:55:33.209134 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:55:33.209153 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:55:33.209180 kernel: Console: colour VGA+ 80x25
Mar 17 17:55:33.209193 kernel: printk: console [tty0] enabled
Mar 17 17:55:33.209206 kernel: printk: console [ttyS0] enabled
Mar 17 17:55:33.209219 kernel: ACPI: Core revision 20230628
Mar 17 17:55:33.209232 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:55:33.209250 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:55:33.209262 kernel: x2apic enabled
Mar 17 17:55:33.209275 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:55:33.209288 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:55:33.209302 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Mar 17 17:55:33.209314 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Mar 17 17:55:33.209325 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 17:55:33.209333 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 17:55:33.209354 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:55:33.209367 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:55:33.209381 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:55:33.209399 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:55:33.209414 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 17:55:33.209428 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:55:33.209443 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:55:33.209456 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 17:55:33.209471 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:55:33.209494 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:55:33.209509 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:55:33.209523 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:55:33.209537 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:55:33.209552 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 17:55:33.210177 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:55:33.210196 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:55:33.210206 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:55:33.210223 kernel: landlock: Up and running.
Mar 17 17:55:33.210231 kernel: SELinux: Initializing.
Mar 17 17:55:33.210240 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:55:33.210249 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:55:33.210258 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Mar 17 17:55:33.210267 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:55:33.210276 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:55:33.210285 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:55:33.210293 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Mar 17 17:55:33.210305 kernel: signal: max sigframe size: 1776
Mar 17 17:55:33.210314 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:55:33.210324 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:55:33.210333 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:55:33.210342 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:55:33.210350 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:55:33.210359 kernel: .... node #0, CPUs: #1
Mar 17 17:55:33.210373 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:55:33.210381 kernel: smpboot: Max logical packages: 1
Mar 17 17:55:33.210393 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Mar 17 17:55:33.210407 kernel: devtmpfs: initialized
Mar 17 17:55:33.210416 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:55:33.210425 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:55:33.210434 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:55:33.210443 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:55:33.210452 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:55:33.210460 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:55:33.210469 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:55:33.210487 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:55:33.210501 kernel: audit: type=2000 audit(1742234131.424:1): state=initialized audit_enabled=0 res=1
Mar 17 17:55:33.210514 kernel: cpuidle: using governor menu
Mar 17 17:55:33.210526 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:55:33.210541 kernel: dca service started, version 1.12.1
Mar 17 17:55:33.210549 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:55:33.210575 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:55:33.211635 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:55:33.211653 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:55:33.211669 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:55:33.211678 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:55:33.211687 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:55:33.211696 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:55:33.211704 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:55:33.211713 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:55:33.211722 kernel: ACPI: Interpreter enabled
Mar 17 17:55:33.211731 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:55:33.211739 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:55:33.211751 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:55:33.211760 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:55:33.211769 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 17:55:33.211778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:55:33.212044 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:55:33.212214 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 17:55:33.212353 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 17:55:33.212374 kernel: acpiphp: Slot [3] registered
Mar 17 17:55:33.212388 kernel: acpiphp: Slot [4] registered
Mar 17 17:55:33.212401 kernel: acpiphp: Slot [5] registered
Mar 17 17:55:33.212410 kernel: acpiphp: Slot [6] registered
Mar 17 17:55:33.212419 kernel: acpiphp: Slot [7] registered
Mar 17 17:55:33.212427 kernel: acpiphp: Slot [8] registered
Mar 17 17:55:33.212436 kernel: acpiphp: Slot [9] registered
Mar 17 17:55:33.212444 kernel: acpiphp: Slot [10] registered
Mar 17 17:55:33.212453 kernel: acpiphp: Slot [11] registered
Mar 17 17:55:33.212461 kernel: acpiphp: Slot [12] registered
Mar 17 17:55:33.212473 kernel: acpiphp: Slot [13] registered
Mar 17 17:55:33.212482 kernel: acpiphp: Slot [14] registered
Mar 17 17:55:33.212490 kernel: acpiphp: Slot [15] registered
Mar 17 17:55:33.212499 kernel: acpiphp: Slot [16] registered
Mar 17 17:55:33.212507 kernel: acpiphp: Slot [17] registered
Mar 17 17:55:33.212516 kernel: acpiphp: Slot [18] registered
Mar 17 17:55:33.212525 kernel: acpiphp: Slot [19] registered
Mar 17 17:55:33.212539 kernel: acpiphp: Slot [20] registered
Mar 17 17:55:33.212553 kernel: acpiphp: Slot [21] registered
Mar 17 17:55:33.214629 kernel: acpiphp: Slot [22] registered
Mar 17 17:55:33.214666 kernel: acpiphp: Slot [23] registered
Mar 17 17:55:33.214676 kernel: acpiphp: Slot [24] registered
Mar 17 17:55:33.214686 kernel: acpiphp: Slot [25] registered
Mar 17 17:55:33.214696 kernel: acpiphp: Slot [26] registered
Mar 17 17:55:33.214705 kernel: acpiphp: Slot [27] registered
Mar 17 17:55:33.214715 kernel: acpiphp: Slot [28] registered
Mar 17 17:55:33.214725 kernel: acpiphp: Slot [29] registered
Mar 17 17:55:33.214734 kernel: acpiphp: Slot [30] registered
Mar 17 17:55:33.214744 kernel: acpiphp: Slot [31] registered
Mar 17 17:55:33.214760 kernel: PCI host bridge to bus 0000:00
Mar 17 17:55:33.215000 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:55:33.215111 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:55:33.215252 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:55:33.215359 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 17:55:33.215446 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Mar 17 17:55:33.215717 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:55:33.215966 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 17:55:33.216184 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 17:55:33.216345 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 17:55:33.216474 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Mar 17 17:55:33.218664 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 17:55:33.218887 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 17:55:33.219040 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 17:55:33.219174 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 17:55:33.219326 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Mar 17 17:55:33.219464 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Mar 17 17:55:33.219669 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 17:55:33.219827 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 17:55:33.219989 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 17:55:33.220166 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 17:55:33.220322 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 17:55:33.220474 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Mar 17 17:55:33.222815 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Mar 17 17:55:33.223031 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 17:55:33.223197 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:55:33.223392 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:55:33.223553 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Mar 17 17:55:33.223792 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Mar 17 17:55:33.223938 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Mar 17 17:55:33.224105 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:55:33.224262 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Mar 17 17:55:33.224404 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Mar 17 17:55:33.224598 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar 17 17:55:33.224782 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Mar 17 17:55:33.224941 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Mar 17 17:55:33.225110 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Mar 17 17:55:33.225271 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar 17 17:55:33.225409 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:55:33.225519 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 17:55:33.227829 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Mar 17 17:55:33.227997 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Mar 17 17:55:33.228174 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:55:33.228331 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Mar 17 17:55:33.228493 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Mar 17 17:55:33.228651 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Mar 17 17:55:33.228781 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 17:55:33.228909 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Mar 17 17:55:33.229010 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Mar 17 17:55:33.229022 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:55:33.229031 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:55:33.229040 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:55:33.229049 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:55:33.229058 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 17:55:33.229072 kernel: iommu: Default domain type: Translated
Mar 17 17:55:33.229081 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:55:33.229089 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:55:33.229098 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:55:33.229107 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:55:33.229116 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Mar 17 17:55:33.229244 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 17:55:33.229376 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 17:55:33.229493 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:55:33.229505 kernel: vgaarb: loaded
Mar 17 17:55:33.229514 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:55:33.229523 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:55:33.229531 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:55:33.229540 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:55:33.229549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:55:33.231659 kernel: pnp: PnP ACPI init
Mar 17 17:55:33.231698 kernel: pnp: PnP ACPI: found 4 devices
Mar 17 17:55:33.231724 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:55:33.231739 kernel: NET: Registered PF_INET protocol family
Mar 17 17:55:33.231753 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:55:33.231772 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:55:33.231787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:55:33.231799 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:55:33.231811 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:55:33.231824 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:55:33.231836 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:55:33.231853 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:55:33.231864 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:55:33.231876 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:55:33.232089 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:55:33.232226 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:55:33.232351 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:55:33.232478 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 17:55:33.232624 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Mar 17 17:55:33.232737 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 17:55:33.232851 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 17:55:33.232865 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 17:55:33.232964 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 49098 usecs
Mar 17 17:55:33.232977 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:55:33.232986 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:55:33.232995 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Mar 17 17:55:33.233004 kernel: Initialise system trusted keyrings
Mar 17 17:55:33.233014 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:55:33.233026 kernel: Key type asymmetric registered
Mar 17 17:55:33.233036 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:55:33.233045 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:55:33.233054 kernel: io scheduler mq-deadline registered
Mar 17 17:55:33.233066 kernel: io scheduler kyber registered
Mar 17 17:55:33.233082 kernel: io scheduler bfq registered
Mar 17 17:55:33.233096 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:55:33.233108 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 17:55:33.233121 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 17:55:33.233137 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 17:55:33.233148 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:55:33.233172 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:55:33.233187 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:55:33.233200 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:55:33.233214 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:55:33.233422 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 17:55:33.233446 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:55:33.235721 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 17:55:33.235918 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T17:55:32 UTC (1742234132)
Mar 17 17:55:33.236054 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 17 17:55:33.236067 kernel: intel_pstate: CPU model not supported
Mar 17 17:55:33.236077 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:55:33.236086 kernel: Segment Routing with IPv6
Mar 17 17:55:33.236095 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:55:33.236104 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:55:33.236113 kernel: Key type dns_resolver registered
Mar 17 17:55:33.236132 kernel: IPI shorthand broadcast: enabled
Mar 17 17:55:33.236141 kernel: sched_clock: Marking stable (1307010481, 159177183)->(1676888067, -210700403)
Mar 17 17:55:33.236150 kernel: registered taskstats version 1
Mar 17 17:55:33.236159 kernel: Loading compiled-in X.509 certificates
Mar 17 17:55:33.236168 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:55:33.236177 kernel: Key type .fscrypt registered
Mar 17 17:55:33.236185 kernel: Key type fscrypt-provisioning registered
Mar 17 17:55:33.236195 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:55:33.236206 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:55:33.236215 kernel: ima: No architecture policies found
Mar 17 17:55:33.236224 kernel: clk: Disabling unused clocks
Mar 17 17:55:33.236233 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:55:33.236242 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:55:33.236268 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:55:33.236280 kernel: Run /init as init process
Mar 17 17:55:33.236289 kernel: with arguments:
Mar 17 17:55:33.236298 kernel: /init
Mar 17 17:55:33.236309 kernel: with environment:
Mar 17 17:55:33.236318 kernel: HOME=/
Mar 17 17:55:33.236327 kernel: TERM=linux
Mar 17 17:55:33.236336 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:55:33.236347 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:55:33.236361 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:55:33.236371 systemd[1]: Detected virtualization kvm.
Mar 17 17:55:33.236380 systemd[1]: Detected architecture x86-64.
Mar 17 17:55:33.236393 systemd[1]: Running in initrd.
Mar 17 17:55:33.236402 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:55:33.236411 systemd[1]: Hostname set to .
Mar 17 17:55:33.236421 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:55:33.236430 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:55:33.236440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:55:33.236452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:55:33.236464 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:55:33.236476 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:55:33.236485 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:55:33.236496 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:55:33.236507 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:55:33.236517 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:55:33.236526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:55:33.236536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:55:33.236547 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:55:33.236557 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:55:33.236605 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:55:33.236614 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:55:33.236624 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:55:33.236636 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:55:33.236646 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Mar 17 17:55:33.236673 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:55:33.236683 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:55:33.236692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:55:33.236702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:55:33.236711 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:55:33.236720 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:55:33.236730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:55:33.236742 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:55:33.236751 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:55:33.236768 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:55:33.236783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:55:33.236797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:33.236810 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:55:33.236824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:55:33.236842 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:55:33.236857 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:55:33.236938 systemd-journald[183]: Collecting audit messages is disabled.
Mar 17 17:55:33.236981 systemd-journald[183]: Journal started
Mar 17 17:55:33.237022 systemd-journald[183]: Runtime Journal (/run/log/journal/9a5439c0a26240aabec49a1337d40f64) is 4.9M, max 39.3M, 34.4M free.
Mar 17 17:55:33.210668 systemd-modules-load[184]: Inserted module 'overlay'
Mar 17 17:55:33.250666 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:55:33.252107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:33.263929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:55:33.277479 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:55:33.277521 kernel: Bridge firewalling registered
Mar 17 17:55:33.275409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:55:33.277332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:55:33.278731 systemd-modules-load[184]: Inserted module 'br_netfilter'
Mar 17 17:55:33.282422 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:55:33.294863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:55:33.310456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:55:33.313805 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:55:33.320019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:55:33.326223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:55:33.333960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:55:33.345878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:55:33.349658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:55:33.374836 dracut-cmdline[217]: dracut-dracut-053
Mar 17 17:55:33.383139 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:55:33.434240 systemd-resolved[220]: Positive Trust Anchors:
Mar 17 17:55:33.434265 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:55:33.434323 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:55:33.439499 systemd-resolved[220]: Defaulting to hostname 'linux'.
Mar 17 17:55:33.443226 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:55:33.444774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:55:33.585699 kernel: SCSI subsystem initialized
Mar 17 17:55:33.600634 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:55:33.629310 kernel: iscsi: registered transport (tcp)
Mar 17 17:55:33.663948 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:55:33.664055 kernel: QLogic iSCSI HBA Driver
Mar 17 17:55:33.763317 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:55:33.776843 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:55:33.815651 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:55:33.815765 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:55:33.817705 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:55:33.878666 kernel: raid6: avx2x4 gen() 23500 MB/s
Mar 17 17:55:33.909399 kernel: raid6: avx2x2 gen() 17819 MB/s
Mar 17 17:55:33.933283 kernel: raid6: avx2x1 gen() 11752 MB/s
Mar 17 17:55:33.933389 kernel: raid6: using algorithm avx2x4 gen() 23500 MB/s
Mar 17 17:55:33.960622 kernel: raid6: .... xor() 5200 MB/s, rmw enabled
Mar 17 17:55:33.960765 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:55:34.009609 kernel: xor: automatically using best checksumming function avx
Mar 17 17:55:34.223644 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:55:34.245369 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:55:34.254910 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:55:34.293032 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Mar 17 17:55:34.301066 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:55:34.312441 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:55:34.354196 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Mar 17 17:55:34.414434 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:55:34.432903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:55:34.542168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:55:34.552861 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:55:34.597053 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:55:34.602574 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:55:34.604704 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:55:34.607124 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:55:34.618115 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:55:34.652938 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:55:34.669266 kernel: scsi host0: Virtio SCSI HBA
Mar 17 17:55:34.678601 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Mar 17 17:55:34.756762 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Mar 17 17:55:34.757008 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:55:34.757031 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:55:34.757048 kernel: GPT:9289727 != 125829119
Mar 17 17:55:34.757064 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:55:34.757082 kernel: GPT:9289727 != 125829119
Mar 17 17:55:34.757110 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:55:34.757129 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:55:34.757252 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Mar 17 17:55:34.778206 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Mar 17 17:55:34.734313 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:55:34.734447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:55:34.751116 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:55:34.753281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:55:34.753548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:34.754826 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:34.764530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:34.768302 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:55:34.802933 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:55:34.803038 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:55:34.805828 kernel: ACPI: bus type USB registered
Mar 17 17:55:34.814620 kernel: usbcore: registered new interface driver usbfs
Mar 17 17:55:34.822653 kernel: libata version 3.00 loaded.
Mar 17 17:55:34.847597 kernel: usbcore: registered new interface driver hub
Mar 17 17:55:34.847683 kernel: usbcore: registered new device driver usb
Mar 17 17:55:34.861069 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 17 17:55:34.920815 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (448)
Mar 17 17:55:34.920847 kernel: scsi host1: ata_piix
Mar 17 17:55:34.921098 kernel: scsi host2: ata_piix
Mar 17 17:55:34.921407 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Mar 17 17:55:34.921428 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Mar 17 17:55:34.925643 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (462)
Mar 17 17:55:34.926706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:55:34.962651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:34.978670 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:55:34.991161 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:55:35.011355 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:55:35.012251 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:55:35.024914 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:55:35.028795 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:55:35.034902 disk-uuid[535]: Primary Header is updated.
Mar 17 17:55:35.034902 disk-uuid[535]: Secondary Entries is updated.
Mar 17 17:55:35.034902 disk-uuid[535]: Secondary Header is updated.
Mar 17 17:55:35.042935 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:55:35.068705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:55:35.139351 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Mar 17 17:55:35.158493 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Mar 17 17:55:35.158759 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Mar 17 17:55:35.158960 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Mar 17 17:55:35.159085 kernel: hub 1-0:1.0: USB hub found
Mar 17 17:55:35.159241 kernel: hub 1-0:1.0: 2 ports detected
Mar 17 17:55:36.062626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:55:36.063342 disk-uuid[536]: The operation has completed successfully.
Mar 17 17:55:36.124513 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:55:36.124711 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:55:36.190933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:55:36.195774 sh[562]: Success
Mar 17 17:55:36.216620 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 17:55:36.311426 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:55:36.325891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:55:36.334049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:55:36.359085 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:55:36.359177 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:55:36.360966 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:55:36.362966 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:55:36.365319 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:55:36.374921 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:55:36.376512 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:55:36.382870 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:55:36.386670 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:55:36.407629 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:55:36.411046 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:55:36.411157 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:55:36.416608 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:55:36.434460 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:55:36.438182 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:55:36.447725 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:55:36.455909 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:55:36.586726 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:55:36.601094 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:55:36.650090 systemd-networkd[747]: lo: Link UP
Mar 17 17:55:36.650107 systemd-networkd[747]: lo: Gained carrier
Mar 17 17:55:36.653784 systemd-networkd[747]: Enumeration completed
Mar 17 17:55:36.653942 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:55:36.655223 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Mar 17 17:55:36.655229 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Mar 17 17:55:36.656318 systemd[1]: Reached target network.target - Network.
Mar 17 17:55:36.656880 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:55:36.656887 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:55:36.658310 systemd-networkd[747]: eth0: Link UP
Mar 17 17:55:36.658315 systemd-networkd[747]: eth0: Gained carrier
Mar 17 17:55:36.658358 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Mar 17 17:55:36.667958 ignition[656]: Ignition 2.20.0
Mar 17 17:55:36.665331 systemd-networkd[747]: eth1: Link UP
Mar 17 17:55:36.667969 ignition[656]: Stage: fetch-offline
Mar 17 17:55:36.665340 systemd-networkd[747]: eth1: Gained carrier
Mar 17 17:55:36.668029 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:55:36.665361 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:55:36.668044 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:55:36.671324 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:55:36.668225 ignition[656]: parsed url from cmdline: ""
Mar 17 17:55:36.668232 ignition[656]: no config URL provided
Mar 17 17:55:36.668241 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:55:36.668255 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:55:36.681679 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
Mar 17 17:55:36.668267 ignition[656]: failed to fetch config: resource requires networking
Mar 17 17:55:36.682120 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:55:36.668535 ignition[656]: Ignition finished successfully
Mar 17 17:55:36.686785 systemd-networkd[747]: eth0: DHCPv4 address 64.23.213.164/19, gateway 64.23.192.1 acquired from 169.254.169.253
Mar 17 17:55:36.730149 ignition[756]: Ignition 2.20.0
Mar 17 17:55:36.730167 ignition[756]: Stage: fetch
Mar 17 17:55:36.730482 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:55:36.730499 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:55:36.730755 ignition[756]: parsed url from cmdline: ""
Mar 17 17:55:36.730783 ignition[756]: no config URL provided
Mar 17 17:55:36.730793 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:55:36.730811 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:55:36.730851 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Mar 17 17:55:36.751105 ignition[756]: GET result: OK
Mar 17 17:55:36.751976 ignition[756]: parsing config with SHA512: 76f5c42cbf3f254dd78aee14a4f052619d2bc87c2b828c2efef96b05a9650f2315374a7b7c49d4c8060a2b1577244bfe3db5d849a8f50f057802e79803fcf19b
Mar 17 17:55:36.761491 unknown[756]: fetched base config from "system"
Mar 17 17:55:36.762646 unknown[756]: fetched base config from "system"
Mar 17 17:55:36.763238 unknown[756]: fetched user config from "digitalocean"
Mar 17 17:55:36.764115 ignition[756]: fetch: fetch complete
Mar 17 17:55:36.764122 ignition[756]: fetch: fetch passed
Mar 17 17:55:36.764197 ignition[756]: Ignition finished successfully
Mar 17 17:55:36.768660 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:55:36.777940 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:55:36.807113 ignition[764]: Ignition 2.20.0
Mar 17 17:55:36.807898 ignition[764]: Stage: kargs
Mar 17 17:55:36.808229 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:55:36.808247 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:55:36.812950 ignition[764]: kargs: kargs passed
Mar 17 17:55:36.813032 ignition[764]: Ignition finished successfully
Mar 17 17:55:36.815087 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:55:36.823898 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:55:36.843545 ignition[770]: Ignition 2.20.0
Mar 17 17:55:36.844908 ignition[770]: Stage: disks
Mar 17 17:55:36.846025 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:55:36.846045 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:55:36.855782 ignition[770]: disks: disks passed
Mar 17 17:55:36.855907 ignition[770]: Ignition finished successfully
Mar 17 17:55:36.858419 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:55:36.860752 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:55:36.862554 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:55:36.866096 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:55:36.867844 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:55:36.869190 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:55:36.877872 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:55:36.898634 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:55:36.903128 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:55:37.360767 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:55:37.521645 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 17:55:37.523400 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:55:37.524933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:55:37.534171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:55:37.539801 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:55:37.542218 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Mar 17 17:55:37.547960 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:55:37.550999 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:55:37.551051 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:55:37.557638 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:55:37.569794 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:55:37.574335 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786)
Mar 17 17:55:37.576606 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:55:37.579616 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:55:37.579698 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:55:37.593663 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:55:37.605594 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:55:37.691663 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:55:37.711479 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:55:37.714796 coreos-metadata[788]: Mar 17 17:55:37.713 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:55:37.721923 coreos-metadata[789]: Mar 17 17:55:37.721 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:55:37.726743 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:55:37.733321 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:55:37.735532 coreos-metadata[788]: Mar 17 17:55:37.734 INFO Fetch successful
Mar 17 17:55:37.742608 coreos-metadata[789]: Mar 17 17:55:37.742 INFO Fetch successful
Mar 17 17:55:37.748586 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Mar 17 17:55:37.749687 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Mar 17 17:55:37.753226 coreos-metadata[789]: Mar 17 17:55:37.752 INFO wrote hostname ci-4230.1.0-0-80157c225a to /sysroot/etc/hostname
Mar 17 17:55:37.755051 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:55:37.906638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:55:37.913820 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:55:37.917890 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:55:37.931596 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:55:37.969929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:55:37.981691 ignition[906]: INFO : Ignition 2.20.0
Mar 17 17:55:37.981691 ignition[906]: INFO : Stage: mount
Mar 17 17:55:37.983373 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:55:37.983373 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:55:37.983373 ignition[906]: INFO : mount: mount passed
Mar 17 17:55:37.983373 ignition[906]: INFO : Ignition finished successfully
Mar 17 17:55:37.986046 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:55:38.006949 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:55:38.295457 systemd-networkd[747]: eth0: Gained IPv6LL
Mar 17 17:55:38.355437 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:55:38.362906 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:55:38.380637 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (918)
Mar 17 17:55:38.386821 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:55:38.386962 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:55:38.389237 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:55:38.404615 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:55:38.407862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:55:38.423408 systemd-networkd[747]: eth1: Gained IPv6LL
Mar 17 17:55:38.447126 ignition[935]: INFO : Ignition 2.20.0
Mar 17 17:55:38.447126 ignition[935]: INFO : Stage: files
Mar 17 17:55:38.449665 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:55:38.449665 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:55:38.451393 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:55:38.452757 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:55:38.452757 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:55:38.458131 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:55:38.459317 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:55:38.459317 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:55:38.458830 unknown[935]: wrote ssh authorized keys file for user: core
Mar 17 17:55:38.462621 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:55:38.462621 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:55:38.506350 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:55:38.620079 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 17:55:38.960401 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:55:39.694187 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:55:39.696602 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:55:39.699627 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:55:39.699627 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:55:39.699627 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:55:39.699627 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:55:39.706605 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:55:39.706605 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:55:39.706605 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:55:39.706605 ignition[935]: INFO : files: files passed
Mar 17 17:55:39.706605 ignition[935]: INFO : Ignition finished successfully
Mar 17 17:55:39.703014 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:55:39.732458 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:55:39.735620 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:55:39.761360 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:55:39.761545 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:55:39.793023 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:55:39.793023 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:55:39.800722 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:55:39.805039 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:55:39.807534 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:55:39.833749 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:55:39.916491 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:55:39.917845 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:55:39.920226 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:55:39.920982 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:55:39.923642 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:55:39.945001 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:55:40.030667 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:55:40.057627 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:55:40.084698 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:55:40.085772 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:55:40.086741 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:55:40.095553 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:55:40.097403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:55:40.105510 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:55:40.109417 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:55:40.110675 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:55:40.111677 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:55:40.115460 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:55:40.116535 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:55:40.117508 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:55:40.118668 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:55:40.119641 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:55:40.120540 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:55:40.121398 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:55:40.121702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:55:40.123012 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:55:40.123969 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:55:40.124885 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:55:40.135358 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:55:40.136425 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:55:40.136693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:55:40.138027 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:55:40.138434 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:55:40.139643 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:55:40.139956 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:55:40.140942 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:55:40.141223 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:55:40.161301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:55:40.162170 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:55:40.162598 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:55:40.171126 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:55:40.174872 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:55:40.176494 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:55:40.186828 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:55:40.187254 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:55:40.213835 ignition[988]: INFO : Ignition 2.20.0
Mar 17 17:55:40.216141 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:55:40.216323 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:55:40.228538 ignition[988]: INFO : Stage: umount
Mar 17 17:55:40.230000 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:55:40.230000 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:55:40.243789 ignition[988]: INFO : umount: umount passed
Mar 17 17:55:40.245159 ignition[988]: INFO : Ignition finished successfully
Mar 17 17:55:40.249485 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:55:40.264947 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:55:40.265183 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:55:40.368256 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:55:40.368488 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:55:40.371776 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:55:40.371907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:55:40.375614 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:55:40.375747 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:55:40.377551 systemd[1]: Stopped target network.target - Network.
Mar 17 17:55:40.379288 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:55:40.379430 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:55:40.380864 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:55:40.381946 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:55:40.395760 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:55:40.396885 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:55:40.400337 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:55:40.402012 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:55:40.402123 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:55:40.402948 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:55:40.403005 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:55:40.405753 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:55:40.405933 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:55:40.406914 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:55:40.407004 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:55:40.411848 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:55:40.418364 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:55:40.422391 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:55:40.422635 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:55:40.426807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:55:40.427003 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:55:40.449989 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:55:40.450271 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:55:40.458114 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:55:40.460131 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:55:40.460374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:55:40.472487 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:55:40.477989 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:55:40.479711 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:55:40.503907 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:55:40.504705 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:55:40.504848 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:55:40.506851 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:55:40.506967 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:55:40.509333 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:55:40.509441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:55:40.512810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:55:40.512935 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:55:40.514729 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:55:40.521226 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:55:40.521366 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:55:40.551747 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:55:40.552041 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:55:40.556771 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:55:40.559267 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:55:40.563035 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:55:40.563195 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:55:40.564405 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:55:40.568522 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:55:40.569282 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:55:40.569391 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:55:40.570202 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:55:40.570259 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:55:40.570939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:55:40.570993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:55:40.599188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:55:40.600219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:55:40.600431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:55:40.605850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:55:40.605984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:40.609496 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:55:40.609755 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:55:40.622476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:55:40.622743 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:55:40.624594 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:55:40.645220 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:55:40.685155 systemd[1]: Switching root.
Mar 17 17:55:40.737796 systemd-journald[183]: Journal stopped
Mar 17 17:55:43.023151 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:55:43.023261 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:55:43.023294 kernel: SELinux: policy capability open_perms=1
Mar 17 17:55:43.023312 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:55:43.023332 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:55:43.023362 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:55:43.023388 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:55:43.023405 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:55:43.023421 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:55:43.023439 kernel: audit: type=1403 audit(1742234141.048:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:55:43.023469 systemd[1]: Successfully loaded SELinux policy in 89.707ms.
Mar 17 17:55:43.023511 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.842ms.
Mar 17 17:55:43.023534 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:55:43.023554 systemd[1]: Detected virtualization kvm.
Mar 17 17:55:43.023597 systemd[1]: Detected architecture x86-64.
Mar 17 17:55:43.023617 systemd[1]: Detected first boot.
Mar 17 17:55:43.023637 systemd[1]: Hostname set to .
Mar 17 17:55:43.023657 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:55:43.023676 zram_generator::config[1033]: No configuration found.
Mar 17 17:55:43.023702 kernel: Guest personality initialized and is inactive
Mar 17 17:55:43.023720 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 17:55:43.023740 kernel: Initialized host personality
Mar 17 17:55:43.023758 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:55:43.023775 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:55:43.023804 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:55:43.023825 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:55:43.023845 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:55:43.023866 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:55:43.023891 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:55:43.023914 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:55:43.023933 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:55:43.023953 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:55:43.023976 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:55:43.023996 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:55:43.024015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:55:43.024036 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:55:43.024062 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:55:43.024082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:55:43.024104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:55:43.024123 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:55:43.024145 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:55:43.024165 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:55:43.024188 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:55:43.024208 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:55:43.024227 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:55:43.024247 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:55:43.024266 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:55:43.024287 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:55:43.024307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:55:43.024326 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:55:43.024344 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:55:43.024370 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:55:43.024388 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:55:43.024408 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:55:43.024428 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:55:43.024447 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:55:43.024468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:55:43.024488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:55:43.024507 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:55:43.024529 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:55:43.024548 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:55:43.024617 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:55:43.024644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:43.024660 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:55:43.024677 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:55:43.024693 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:55:43.024711 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:55:43.024729 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:55:43.024747 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:55:43.024771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:43.024792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:55:43.024813 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:55:43.024833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:55:43.024851 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:55:43.024870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:55:43.024891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:55:43.024909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:55:43.024934 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:55:43.024952 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:55:43.024971 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:55:43.025007 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:55:43.025027 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:55:43.025048 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:55:43.025069 kernel: fuse: init (API version 7.39)
Mar 17 17:55:43.025091 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:55:43.025111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:55:43.025137 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:55:43.025154 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:55:43.025174 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:55:43.025194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:55:43.025219 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:55:43.025241 systemd[1]: Stopped verity-setup.service.
Mar 17 17:55:43.025261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:43.025279 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:55:43.025299 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:55:43.025319 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:55:43.025341 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:55:43.025362 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:55:43.025388 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:55:43.025408 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:55:43.025428 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:55:43.025449 kernel: loop: module loaded
Mar 17 17:55:43.025469 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:55:43.025487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:55:43.025507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:55:43.025531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:55:43.025551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:55:43.025601 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:55:43.025622 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:55:43.025641 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:55:43.025660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:55:43.025678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:55:43.025697 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:55:43.025722 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:55:43.025741 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:55:43.025762 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:55:43.025782 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:55:43.025802 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:55:43.025892 systemd-journald[1110]: Collecting audit messages is disabled.
Mar 17 17:55:43.025942 systemd-journald[1110]: Journal started
Mar 17 17:55:43.025988 systemd-journald[1110]: Runtime Journal (/run/log/journal/9a5439c0a26240aabec49a1337d40f64) is 4.9M, max 39.3M, 34.4M free.
Mar 17 17:55:42.365637 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:55:42.381950 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:55:42.382757 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:55:43.055596 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:55:43.080593 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:55:43.080679 kernel: ACPI: bus type drm_connector registered
Mar 17 17:55:43.085915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:55:43.097829 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:55:43.097937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:55:43.118617 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:55:43.118754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:55:43.128948 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:55:43.150026 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:55:43.156614 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:55:43.169608 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:55:43.172266 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:55:43.179371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:55:43.186345 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:55:43.188798 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:55:43.192416 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:55:43.194936 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:55:43.202024 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:55:43.220741 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:55:43.288628 kernel: loop0: detected capacity change from 0 to 147912
Mar 17 17:55:43.323100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:55:43.324017 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:55:43.342010 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:55:43.361093 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:55:43.377702 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:55:43.402411 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:55:43.469292 systemd-journald[1110]: Time spent on flushing to /var/log/journal/9a5439c0a26240aabec49a1337d40f64 is 85.851ms for 1005 entries.
Mar 17 17:55:43.469292 systemd-journald[1110]: System Journal (/var/log/journal/9a5439c0a26240aabec49a1337d40f64) is 8M, max 195.6M, 187.6M free.
Mar 17 17:55:43.612358 systemd-journald[1110]: Received client request to flush runtime journal.
Mar 17 17:55:43.612537 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 17:55:43.612617 kernel: loop2: detected capacity change from 0 to 138176
Mar 17 17:55:43.470866 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:55:43.546706 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:55:43.548042 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:55:43.582890 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:55:43.601992 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:55:43.630231 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:55:43.649555 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:55:43.666988 kernel: loop3: detected capacity change from 0 to 8
Mar 17 17:55:43.672656 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:55:43.711346 kernel: loop4: detected capacity change from 0 to 147912
Mar 17 17:55:43.692020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:55:43.789606 kernel: loop5: detected capacity change from 0 to 210664
Mar 17 17:55:43.840603 kernel: loop6: detected capacity change from 0 to 138176
Mar 17 17:55:43.864881 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 17 17:55:43.866752 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 17 17:55:43.890593 kernel: loop7: detected capacity change from 0 to 8
Mar 17 17:55:43.894277 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Mar 17 17:55:43.896214 (sd-merge)[1182]: Merged extensions into '/usr'.
Mar 17 17:55:43.921741 systemd[1]: Reload requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:55:43.921788 systemd[1]: Reloading...
Mar 17 17:55:44.279608 zram_generator::config[1212]: No configuration found.
Mar 17 17:55:44.722799 ldconfig[1136]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:55:44.796821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:55:44.944433 systemd[1]: Reloading finished in 1021 ms.
Mar 17 17:55:44.975906 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:55:44.979610 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:55:44.983691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:55:45.025990 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:55:45.036886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:55:45.067705 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:55:45.067748 systemd[1]: Reloading...
Mar 17 17:55:45.145816 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:55:45.148527 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:55:45.154128 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:55:45.154883 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 17 17:55:45.160263 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 17 17:55:45.173284 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:55:45.173729 systemd-tmpfiles[1258]: Skipping /boot
Mar 17 17:55:45.237714 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:55:45.239637 systemd-tmpfiles[1258]: Skipping /boot
Mar 17 17:55:45.369099 zram_generator::config[1292]: No configuration found.
Mar 17 17:55:45.641770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:55:45.768178 systemd[1]: Reloading finished in 699 ms.
Mar 17 17:55:45.805441 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:55:45.834697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:55:45.852790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:45.868472 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:55:45.876014 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:55:45.880068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:45.891408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:55:45.898089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:55:45.902997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:55:45.905965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:55:45.906185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:55:45.912360 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:55:45.925734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:55:45.939050 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:55:45.954898 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:55:45.957708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:45.962767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:55:45.964224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:55:45.974642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:45.974969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:45.987526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:55:45.988653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:55:45.988869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:55:45.995057 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:55:45.996791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:46.001179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:55:46.001596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:55:46.005104 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:55:46.013342 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:46.013931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:46.022121 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:55:46.027145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:55:46.029393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:55:46.029641 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:55:46.029840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:46.046527 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:55:46.057481 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:55:46.057859 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:55:46.073209 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:55:46.084325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:55:46.084647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:55:46.089029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:55:46.104108 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:55:46.116981 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
Mar 17 17:55:46.123805 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:55:46.137002 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:55:46.170400 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:55:46.173100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:55:46.175224 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:55:46.176698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:55:46.178013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:55:46.179506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:55:46.182513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:55:46.211862 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:55:46.212492 augenrules[1375]: No rules
Mar 17 17:55:46.216091 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:55:46.217374 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:55:46.223740 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:55:46.237985 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:55:46.248860 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:55:46.450661 systemd-resolved[1343]: Positive Trust Anchors:
Mar 17 17:55:46.450679 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:55:46.450726 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:55:46.460801 systemd-resolved[1343]: Using system hostname 'ci-4230.1.0-0-80157c225a'.
Mar 17 17:55:46.463727 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:55:46.464899 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:55:46.540737 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:55:46.542020 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:55:46.568357 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Mar 17 17:55:46.579136 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Mar 17 17:55:46.581747 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:46.581966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:55:46.594143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:55:46.602313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:55:46.611068 systemd-networkd[1385]: lo: Link UP
Mar 17 17:55:46.611078 systemd-networkd[1385]: lo: Gained carrier
Mar 17 17:55:46.630948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:55:46.632461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:55:46.632528 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:55:46.632594 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:55:46.632618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:55:46.638899 systemd-networkd[1385]: Enumeration completed
Mar 17 17:55:46.639081 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:55:46.640156 systemd[1]: Reached target network.target - Network.
Mar 17 17:55:46.652585 systemd-timesyncd[1360]: No network connectivity, watching for changes.
Mar 17 17:55:46.683892 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:55:46.694758 kernel: ISO 9660 Extensions: RRIP_1991A
Mar 17 17:55:46.695327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:55:46.762835 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Mar 17 17:55:46.767870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:55:46.770032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:55:46.774479 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:55:46.774800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:55:46.776303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:55:46.776621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:55:46.784407 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:55:46.792761 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:55:46.792855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:55:46.831666 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:55:46.868693 systemd-networkd[1385]: eth0: Configuring with /run/systemd/network/10-ba:d0:8f:95:df:c5.network.
Mar 17 17:55:46.871318 systemd-networkd[1385]: eth0: Link UP
Mar 17 17:55:46.871332 systemd-networkd[1385]: eth0: Gained carrier
Mar 17 17:55:46.878159 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Mar 17 17:55:46.889271 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1392)
Mar 17 17:55:46.939505 systemd-networkd[1385]: eth1: Configuring with /run/systemd/network/10-fa:78:04:c0:12:c2.network.
Mar 17 17:55:46.941317 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Mar 17 17:55:46.943071 systemd-networkd[1385]: eth1: Link UP
Mar 17 17:55:46.943090 systemd-networkd[1385]: eth1: Gained carrier
Mar 17 17:55:46.953768 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Mar 17 17:55:46.954869 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Mar 17 17:55:46.996867 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 17 17:55:47.003472 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 17:55:47.009636 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:55:47.072267 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:55:47.086967 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 17:55:47.087284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:55:47.136606 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:55:47.154618 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:55:47.197074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:47.234617 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar 17 17:55:47.252124 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar 17 17:55:47.262851 kernel: Console: switching to colour dummy device 80x25
Mar 17 17:55:47.269698 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:55:47.269813 kernel: [drm] features: -context_init
Mar 17 17:55:47.272723 kernel: [drm] number of scanouts: 1
Mar 17 17:55:47.272833 kernel: [drm] number of cap sets: 0
Mar 17 17:55:47.281177 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Mar 17 17:55:47.303725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:55:47.304182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:47.307097 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:55:47.321754 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 17 17:55:47.321954 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:55:47.326319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:47.337913 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:55:47.414318 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:55:47.415203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:47.449112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:47.572884 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:55:47.579291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:47.613156 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:55:47.629972 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:55:47.660218 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:55:47.715442 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:55:47.716809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:55:47.718112 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:55:47.720358 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:55:47.721724 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:55:47.722090 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:55:47.722359 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:55:47.722465 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:55:47.722582 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:55:47.723737 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:55:47.725898 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:55:47.728272 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:55:47.732443 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:55:47.740693 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:55:47.743468 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:55:47.751509 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:55:47.779532 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:55:47.783210 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:55:47.790961 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:55:47.794717 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:55:47.799043 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:55:47.800448 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:55:47.803536 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:55:47.804439 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:55:47.805185 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:55:47.812889 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:55:47.824986 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:55:47.838209 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:55:47.857922 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:55:47.876701 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:55:47.877683 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:55:47.892154 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:55:47.904610 jq[1462]: false
Mar 17 17:55:47.903778 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:55:47.911820 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:55:47.925106 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:55:47.934403 coreos-metadata[1460]: Mar 17 17:55:47.934 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:55:47.948214 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:55:47.955305 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:55:47.956386 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:55:47.963532 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:55:47.979391 coreos-metadata[1460]: Mar 17 17:55:47.978 INFO Fetch successful
Mar 17 17:55:47.990920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:55:47.997329 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:55:48.012594 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:55:48.013774 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:55:48.026458 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:55:48.027130 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:55:48.093040 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:55:48.095079 dbus-daemon[1461]: [system] SELinux support is enabled
Mar 17 17:55:48.096362 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:55:48.103430 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:55:48.117322 jq[1479]: true
Mar 17 17:55:48.120499 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:55:48.120554 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:55:48.123547 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:55:48.126265 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Mar 17 17:55:48.126309 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:55:48.132161 update_engine[1474]: I20250317 17:55:48.131971 1474 main.cc:92] Flatcar Update Engine starting
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found loop4
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found loop5
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found loop6
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found loop7
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found vda
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found vda1
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found vda2
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found vda3
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found usr
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found vda4
Mar 17 17:55:48.138494 extend-filesystems[1465]: Found vda6
Mar 17 17:55:48.184481 extend-filesystems[1465]: Found vda7
Mar 17 17:55:48.184481 extend-filesystems[1465]: Found vda9
Mar 17 17:55:48.184481 extend-filesystems[1465]: Checking size of /dev/vda9
Mar 17 17:55:48.146858 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:55:48.188326 update_engine[1474]: I20250317 17:55:48.150625 1474 update_check_scheduler.cc:74] Next update check in 11m42s
Mar 17 17:55:48.188377 tar[1481]: linux-amd64/helm
Mar 17 17:55:48.166174 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:55:48.205808 jq[1493]: true
Mar 17 17:55:48.216975 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:55:48.285514 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:55:48.287465 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:55:48.288846 extend-filesystems[1465]: Resized partition /dev/vda9
Mar 17 17:55:48.348924 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:55:48.384412 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Mar 17 17:55:48.477789 bash[1521]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:55:48.479753 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:55:48.498910 systemd[1]: Starting sshkeys.service...
Mar 17 17:55:48.540673 systemd-logind[1472]: New seat seat0.
Mar 17 17:55:48.542283 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:55:48.542304 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:55:48.542657 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:55:48.609459 systemd-networkd[1385]: eth0: Gained IPv6LL
Mar 17 17:55:48.616500 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Mar 17 17:55:48.619453 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:55:48.649341 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:55:48.661551 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:55:48.666462 systemd-networkd[1385]: eth1: Gained IPv6LL
Mar 17 17:55:48.667366 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Mar 17 17:55:48.693110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1396)
Mar 17 17:55:48.693664 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 17 17:55:48.685301 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:55:48.699073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:55:48.722253 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:55:48.774103 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:55:48.774103 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 17 17:55:48.774103 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 17 17:55:48.836697 extend-filesystems[1465]: Resized filesystem in /dev/vda9
Mar 17 17:55:48.836697 extend-filesystems[1465]: Found vdb
Mar 17 17:55:48.813111 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:55:48.813543 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:55:48.875734 coreos-metadata[1527]: Mar 17 17:55:48.875 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:55:48.922528 coreos-metadata[1527]: Mar 17 17:55:48.920 INFO Fetch successful
Mar 17 17:55:48.984346 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:55:48.990760 unknown[1527]: wrote ssh authorized keys file for user: core
Mar 17 17:55:49.061239 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:55:49.087697 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:55:49.087965 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:55:49.088437 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:55:49.103873 systemd[1]: Finished sshkeys.service.
Mar 17 17:55:49.162979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:55:49.196190 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:55:49.262737 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:55:49.265696 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:55:49.279247 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:55:49.346393 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:55:49.360109 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:55:49.373177 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:55:49.374186 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:55:49.412749 containerd[1497]: time="2025-03-17T17:55:49.410168778Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:55:49.503682 containerd[1497]: time="2025-03-17T17:55:49.503403731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:55:49.511307 containerd[1497]: time="2025-03-17T17:55:49.511221546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:55:49.511307 containerd[1497]: time="2025-03-17T17:55:49.511294712Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:55:49.511307 containerd[1497]: time="2025-03-17T17:55:49.511326174Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:55:49.513495 containerd[1497]: time="2025-03-17T17:55:49.511553660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:55:49.513495 containerd[1497]: time="2025-03-17T17:55:49.512512592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:55:49.513495 containerd[1497]: time="2025-03-17T17:55:49.513099775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:55:49.513495 containerd[1497]: time="2025-03-17T17:55:49.513133667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:55:49.513807 containerd[1497]: time="2025-03-17T17:55:49.513593934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:55:49.513807 containerd[1497]: time="2025-03-17T17:55:49.513624086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:55:49.513807 containerd[1497]: time="2025-03-17T17:55:49.513694761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:55:49.513807 containerd[1497]: time="2025-03-17T17:55:49.513713136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:55:49.514499 containerd[1497]: time="2025-03-17T17:55:49.513974775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:55:49.514499 containerd[1497]: time="2025-03-17T17:55:49.514337342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:55:49.514664 containerd[1497]: time="2025-03-17T17:55:49.514630671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:55:49.514664 containerd[1497]: time="2025-03-17T17:55:49.514655893Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:55:49.516102 containerd[1497]: time="2025-03-17T17:55:49.514793035Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:55:49.516102 containerd[1497]: time="2025-03-17T17:55:49.514879789Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:55:49.551405 containerd[1497]: time="2025-03-17T17:55:49.551324673Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:55:49.551614 containerd[1497]: time="2025-03-17T17:55:49.551436363Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:55:49.551614 containerd[1497]: time="2025-03-17T17:55:49.551466486Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:55:49.551614 containerd[1497]: time="2025-03-17T17:55:49.551495102Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:55:49.551614 containerd[1497]: time="2025-03-17T17:55:49.551522472Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:55:49.551897 containerd[1497]: time="2025-03-17T17:55:49.551864327Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:55:49.552341 containerd[1497]: time="2025-03-17T17:55:49.552309226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:55:49.552547 containerd[1497]: time="2025-03-17T17:55:49.552521533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:55:49.553981 containerd[1497]: time="2025-03-17T17:55:49.553896825Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:55:49.554119 containerd[1497]: time="2025-03-17T17:55:49.554020076Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:55:49.554435 containerd[1497]: time="2025-03-17T17:55:49.554397240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:55:49.554739 containerd[1497]: time="2025-03-17T17:55:49.554710770Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..."
type=io.containerd.service.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.554752320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555674443Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555724367Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555746275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555786629Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555813354Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555870397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555897346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555943482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.555983 containerd[1497]: time="2025-03-17T17:55:49.555972270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 17 17:55:49.556304 containerd[1497]: time="2025-03-17T17:55:49.555995228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.556304 containerd[1497]: time="2025-03-17T17:55:49.556259290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.556304 containerd[1497]: time="2025-03-17T17:55:49.556286226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.557205 containerd[1497]: time="2025-03-17T17:55:49.556354992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557657684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557719225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557743836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557839798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557868490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557887119Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557924959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557947344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.557963225Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.558043960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.558074446Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.558096539Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:55:49.559392 containerd[1497]: time="2025-03-17T17:55:49.558116030Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:55:49.559910 containerd[1497]: time="2025-03-17T17:55:49.558130251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:55:49.559910 containerd[1497]: time="2025-03-17T17:55:49.558195363Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:55:49.559910 containerd[1497]: time="2025-03-17T17:55:49.558223110Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:55:49.559910 containerd[1497]: time="2025-03-17T17:55:49.558241052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:55:49.560012 containerd[1497]: time="2025-03-17T17:55:49.558673736Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:55:49.560012 containerd[1497]: time="2025-03-17T17:55:49.558789177Z" level=info msg="Connect containerd service" Mar 17 17:55:49.560012 containerd[1497]: time="2025-03-17T17:55:49.558871366Z" level=info msg="using legacy CRI server" Mar 17 17:55:49.560012 containerd[1497]: time="2025-03-17T17:55:49.558892598Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:55:49.560012 containerd[1497]: time="2025-03-17T17:55:49.559106063Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:55:49.571341 containerd[1497]: time="2025-03-17T17:55:49.571259422Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:55:49.576281 containerd[1497]: time="2025-03-17T17:55:49.572334311Z" level=info msg="Start subscribing containerd event" Mar 17 17:55:49.576281 containerd[1497]: time="2025-03-17T17:55:49.572425704Z" level=info msg="Start recovering state" Mar 17 17:55:49.576281 containerd[1497]: time="2025-03-17T17:55:49.572553319Z" level=info msg="Start event monitor" Mar 17 17:55:49.576281 containerd[1497]: time="2025-03-17T17:55:49.572601295Z" level=info msg="Start 
snapshots syncer" Mar 17 17:55:49.576281 containerd[1497]: time="2025-03-17T17:55:49.572616048Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:55:49.576281 containerd[1497]: time="2025-03-17T17:55:49.572628019Z" level=info msg="Start streaming server" Mar 17 17:55:49.578198 containerd[1497]: time="2025-03-17T17:55:49.578109307Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:55:49.581221 containerd[1497]: time="2025-03-17T17:55:49.578237004Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:55:49.581221 containerd[1497]: time="2025-03-17T17:55:49.579767067Z" level=info msg="containerd successfully booted in 0.173510s" Mar 17 17:55:49.578548 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:55:50.102345 tar[1481]: linux-amd64/LICENSE Mar 17 17:55:50.102345 tar[1481]: linux-amd64/README.md Mar 17 17:55:50.124202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:55:50.809621 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:55:50.810678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:50.815720 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:55:50.826791 systemd[1]: Startup finished in 1.535s (kernel) + 8.174s (initrd) + 9.865s (userspace) = 19.575s. 
Mar 17 17:55:52.004120 kubelet[1585]: E0317 17:55:52.003987 1585 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:55:52.008500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:55:52.009220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:55:52.009997 systemd[1]: kubelet.service: Consumed 1.656s CPU time, 246.3M memory peak. Mar 17 17:55:56.937900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:55:56.943180 systemd[1]: Started sshd@0-64.23.213.164:22-139.178.68.195:44402.service - OpenSSH per-connection server daemon (139.178.68.195:44402). Mar 17 17:55:57.055674 sshd[1598]: Accepted publickey for core from 139.178.68.195 port 44402 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:57.059261 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:57.069236 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:55:57.077201 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:55:57.089708 systemd-logind[1472]: New session 1 of user core. Mar 17 17:55:57.101616 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:55:57.116183 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:55:57.125774 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:55:57.131235 systemd-logind[1472]: New session c1 of user core. Mar 17 17:55:57.354947 systemd[1602]: Queued start job for default target default.target. 
Mar 17 17:55:57.366625 systemd[1602]: Created slice app.slice - User Application Slice. Mar 17 17:55:57.366928 systemd[1602]: Reached target paths.target - Paths. Mar 17 17:55:57.367119 systemd[1602]: Reached target timers.target - Timers. Mar 17 17:55:57.384279 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:55:57.398619 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:55:57.398817 systemd[1602]: Reached target sockets.target - Sockets. Mar 17 17:55:57.398897 systemd[1602]: Reached target basic.target - Basic System. Mar 17 17:55:57.398955 systemd[1602]: Reached target default.target - Main User Target. Mar 17 17:55:57.399000 systemd[1602]: Startup finished in 256ms. Mar 17 17:55:57.399274 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:55:57.411053 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:55:57.488990 systemd[1]: Started sshd@1-64.23.213.164:22-139.178.68.195:44414.service - OpenSSH per-connection server daemon (139.178.68.195:44414). Mar 17 17:55:57.563691 sshd[1613]: Accepted publickey for core from 139.178.68.195 port 44414 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:57.566506 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:57.584793 systemd-logind[1472]: New session 2 of user core. Mar 17 17:55:57.588939 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:55:57.655036 sshd[1615]: Connection closed by 139.178.68.195 port 44414 Mar 17 17:55:57.654798 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:57.661189 systemd[1]: sshd@1-64.23.213.164:22-139.178.68.195:44414.service: Deactivated successfully. Mar 17 17:55:57.663967 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:55:57.673136 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. 
Mar 17 17:55:57.674502 systemd-logind[1472]: Removed session 2. Mar 17 17:55:57.722395 systemd[1]: Started sshd@2-64.23.213.164:22-139.178.68.195:44416.service - OpenSSH per-connection server daemon (139.178.68.195:44416). Mar 17 17:55:57.778757 sshd[1621]: Accepted publickey for core from 139.178.68.195 port 44416 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:57.781021 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:57.795831 systemd-logind[1472]: New session 3 of user core. Mar 17 17:55:57.806937 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:55:57.872358 sshd[1623]: Connection closed by 139.178.68.195 port 44416 Mar 17 17:55:57.873422 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:57.888086 systemd[1]: sshd@2-64.23.213.164:22-139.178.68.195:44416.service: Deactivated successfully. Mar 17 17:55:57.893705 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:55:57.897408 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:55:57.910244 systemd[1]: Started sshd@3-64.23.213.164:22-139.178.68.195:44418.service - OpenSSH per-connection server daemon (139.178.68.195:44418). Mar 17 17:55:57.914840 systemd-logind[1472]: Removed session 3. Mar 17 17:55:57.971500 sshd[1628]: Accepted publickey for core from 139.178.68.195 port 44418 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:57.974300 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:57.987441 systemd-logind[1472]: New session 4 of user core. Mar 17 17:55:57.994044 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 17 17:55:58.061202 sshd[1631]: Connection closed by 139.178.68.195 port 44418 Mar 17 17:55:58.061904 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:58.077543 systemd[1]: sshd@3-64.23.213.164:22-139.178.68.195:44418.service: Deactivated successfully. Mar 17 17:55:58.080962 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:55:58.083752 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:55:58.089360 systemd[1]: Started sshd@4-64.23.213.164:22-139.178.68.195:44420.service - OpenSSH per-connection server daemon (139.178.68.195:44420). Mar 17 17:55:58.092489 systemd-logind[1472]: Removed session 4. Mar 17 17:55:58.151319 sshd[1636]: Accepted publickey for core from 139.178.68.195 port 44420 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:58.154295 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:58.164682 systemd-logind[1472]: New session 5 of user core. Mar 17 17:55:58.171967 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:55:58.250422 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:55:58.251477 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:58.947426 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:55:58.948820 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:55:59.637019 dockerd[1658]: time="2025-03-17T17:55:59.636349175Z" level=info msg="Starting up" Mar 17 17:55:59.981992 dockerd[1658]: time="2025-03-17T17:55:59.980919027Z" level=info msg="Loading containers: start." 
Mar 17 17:56:00.349666 kernel: Initializing XFRM netlink socket Mar 17 17:56:00.401499 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection. Mar 17 17:56:00.554944 systemd-networkd[1385]: docker0: Link UP Mar 17 17:56:01.034197 systemd-timesyncd[1360]: Contacted time server 142.202.190.19:123 (2.flatcar.pool.ntp.org). Mar 17 17:56:01.034290 systemd-timesyncd[1360]: Initial clock synchronization to Mon 2025-03-17 17:56:01.033366 UTC. Mar 17 17:56:01.035027 systemd-resolved[1343]: Clock change detected. Flushing caches. Mar 17 17:56:01.037489 dockerd[1658]: time="2025-03-17T17:56:01.037023170Z" level=info msg="Loading containers: done." Mar 17 17:56:01.076042 dockerd[1658]: time="2025-03-17T17:56:01.075138014Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:56:01.076042 dockerd[1658]: time="2025-03-17T17:56:01.075307866Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:56:01.076042 dockerd[1658]: time="2025-03-17T17:56:01.075497577Z" level=info msg="Daemon has completed initialization" Mar 17 17:56:01.175877 dockerd[1658]: time="2025-03-17T17:56:01.174804358Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:56:01.176244 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:56:02.687378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:56:02.704872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:02.713212 containerd[1497]: time="2025-03-17T17:56:02.713042033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:56:03.072816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:56:03.076595 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:03.306179 kubelet[1866]: E0317 17:56:03.306014 1866 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:03.313699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:03.313922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:03.315214 systemd[1]: kubelet.service: Consumed 271ms CPU time, 95.7M memory peak. Mar 17 17:56:03.686207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823665045.mount: Deactivated successfully. Mar 17 17:56:07.516935 containerd[1497]: time="2025-03-17T17:56:07.516127593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:07.534456 containerd[1497]: time="2025-03-17T17:56:07.534180021Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 17 17:56:07.537693 containerd[1497]: time="2025-03-17T17:56:07.536459033Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:07.544826 containerd[1497]: time="2025-03-17T17:56:07.541941197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:07.546561 containerd[1497]: time="2025-03-17T17:56:07.546476072Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 4.833372242s" Mar 17 17:56:07.546896 containerd[1497]: time="2025-03-17T17:56:07.546848988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 17:56:07.623392 containerd[1497]: time="2025-03-17T17:56:07.623332293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:56:07.637008 systemd-resolved[1343]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Mar 17 17:56:10.720199 systemd-resolved[1343]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Mar 17 17:56:11.188680 containerd[1497]: time="2025-03-17T17:56:11.188550564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:11.194362 containerd[1497]: time="2025-03-17T17:56:11.194263271Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 17 17:56:11.196468 containerd[1497]: time="2025-03-17T17:56:11.196372827Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:11.203836 containerd[1497]: time="2025-03-17T17:56:11.203764305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:11.205430 containerd[1497]: time="2025-03-17T17:56:11.205154437Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 3.581530735s" Mar 17 17:56:11.205430 containerd[1497]: time="2025-03-17T17:56:11.205222836Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 17:56:11.303438 containerd[1497]: time="2025-03-17T17:56:11.302867319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:56:13.476676 containerd[1497]: time="2025-03-17T17:56:13.471242989Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:13.477827 containerd[1497]: time="2025-03-17T17:56:13.477761722Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 17 17:56:13.478962 containerd[1497]: time="2025-03-17T17:56:13.478917546Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:13.483889 containerd[1497]: time="2025-03-17T17:56:13.483837830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:13.485035 containerd[1497]: time="2025-03-17T17:56:13.484982879Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 2.182057118s" Mar 17 17:56:13.485035 containerd[1497]: time="2025-03-17T17:56:13.485034328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 17:56:13.487522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:56:13.498840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:13.530170 containerd[1497]: time="2025-03-17T17:56:13.530116463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:56:13.719346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:56:13.720118 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:56:13.831264 kubelet[1957]: E0317 17:56:13.831156 1957 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:56:13.834569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:56:13.834842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:56:13.836011 systemd[1]: kubelet.service: Consumed 231ms CPU time, 96.8M memory peak. Mar 17 17:56:13.984976 systemd-resolved[1343]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Mar 17 17:56:15.447820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113073803.mount: Deactivated successfully. 
Mar 17 17:56:16.531400 containerd[1497]: time="2025-03-17T17:56:16.531128150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:16.534682 containerd[1497]: time="2025-03-17T17:56:16.534577062Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372"
Mar 17 17:56:16.536233 containerd[1497]: time="2025-03-17T17:56:16.536179714Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:16.540644 containerd[1497]: time="2025-03-17T17:56:16.540423761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:16.542181 containerd[1497]: time="2025-03-17T17:56:16.541674529Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 3.011288185s"
Mar 17 17:56:16.542181 containerd[1497]: time="2025-03-17T17:56:16.541719870Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 17:56:16.594023 containerd[1497]: time="2025-03-17T17:56:16.593507775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:56:17.159978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505504434.mount: Deactivated successfully.
Mar 17 17:56:18.834818 containerd[1497]: time="2025-03-17T17:56:18.834748760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:18.837928 containerd[1497]: time="2025-03-17T17:56:18.837853149Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Mar 17 17:56:18.858950 containerd[1497]: time="2025-03-17T17:56:18.858145120Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:18.874360 containerd[1497]: time="2025-03-17T17:56:18.874247536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:18.877130 containerd[1497]: time="2025-03-17T17:56:18.876846885Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.283272676s"
Mar 17 17:56:18.877130 containerd[1497]: time="2025-03-17T17:56:18.876921307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 17:56:18.933684 containerd[1497]: time="2025-03-17T17:56:18.933310084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 17:56:19.488845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566565207.mount: Deactivated successfully.
Mar 17 17:56:19.499364 containerd[1497]: time="2025-03-17T17:56:19.499278128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:19.500967 containerd[1497]: time="2025-03-17T17:56:19.500880696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Mar 17 17:56:19.501819 containerd[1497]: time="2025-03-17T17:56:19.501770623Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:19.505786 containerd[1497]: time="2025-03-17T17:56:19.505711527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:19.507172 containerd[1497]: time="2025-03-17T17:56:19.507097153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 573.729451ms"
Mar 17 17:56:19.507906 containerd[1497]: time="2025-03-17T17:56:19.507865483Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 17:56:19.552741 containerd[1497]: time="2025-03-17T17:56:19.552678967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 17:56:20.220248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884142219.mount: Deactivated successfully.
Mar 17 17:56:23.003216 containerd[1497]: time="2025-03-17T17:56:23.003142842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:23.006279 containerd[1497]: time="2025-03-17T17:56:23.006190084Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Mar 17 17:56:23.007758 containerd[1497]: time="2025-03-17T17:56:23.007668238Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:23.013912 containerd[1497]: time="2025-03-17T17:56:23.013854084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:23.018478 containerd[1497]: time="2025-03-17T17:56:23.018364614Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.465614758s"
Mar 17 17:56:23.018478 containerd[1497]: time="2025-03-17T17:56:23.018455804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 17:56:23.987247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 17:56:23.999969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:56:24.234199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:56:24.241409 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:56:24.353686 kubelet[2148]: E0317 17:56:24.353377 2148 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:56:24.359769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:56:24.360362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:56:24.361525 systemd[1]: kubelet.service: Consumed 220ms CPU time, 98.5M memory peak.
Mar 17 17:56:26.639353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:56:26.640485 systemd[1]: kubelet.service: Consumed 220ms CPU time, 98.5M memory peak.
Mar 17 17:56:26.648216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:56:26.695959 systemd[1]: Reload requested from client PID 2163 ('systemctl') (unit session-5.scope)...
Mar 17 17:56:26.695987 systemd[1]: Reloading...
Mar 17 17:56:26.933168 zram_generator::config[2210]: No configuration found.
Mar 17 17:56:27.165506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:56:27.360464 systemd[1]: Reloading finished in 663 ms.
Mar 17 17:56:27.439764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:56:27.458422 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:56:27.465430 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:56:27.468447 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:56:27.468753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:56:27.468827 systemd[1]: kubelet.service: Consumed 139ms CPU time, 84.4M memory peak.
Mar 17 17:56:27.476233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:56:27.694092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:56:27.696481 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:56:27.808578 kubelet[2263]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:56:27.808578 kubelet[2263]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:56:27.808578 kubelet[2263]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:56:27.811144 kubelet[2263]: I0317 17:56:27.811031 2263 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:56:28.296225 kubelet[2263]: I0317 17:56:28.295487 2263 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:56:28.296225 kubelet[2263]: I0317 17:56:28.295543 2263 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:56:28.296225 kubelet[2263]: I0317 17:56:28.295946 2263 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:56:28.362783 kubelet[2263]: I0317 17:56:28.361336 2263 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:56:28.362783 kubelet[2263]: E0317 17:56:28.362115 2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.213.164:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.396457 kubelet[2263]: I0317 17:56:28.395765 2263 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:56:28.400604 kubelet[2263]: I0317 17:56:28.400493 2263 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:56:28.402725 kubelet[2263]: I0317 17:56:28.401370 2263 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-0-80157c225a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:56:28.402725 kubelet[2263]: I0317 17:56:28.402570 2263 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:56:28.402725 kubelet[2263]: I0317 17:56:28.402589 2263 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:56:28.404477 kubelet[2263]: I0317 17:56:28.404209 2263 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:56:28.407025 kubelet[2263]: I0317 17:56:28.406266 2263 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:56:28.407025 kubelet[2263]: I0317 17:56:28.406313 2263 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:56:28.407025 kubelet[2263]: I0317 17:56:28.406357 2263 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:56:28.407025 kubelet[2263]: I0317 17:56:28.406384 2263 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:56:28.413143 kubelet[2263]: W0317 17:56:28.411055 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.213.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-0-80157c225a&limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.413143 kubelet[2263]: E0317 17:56:28.413027 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.213.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-0-80157c225a&limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.418866 kubelet[2263]: W0317 17:56:28.417912 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.213.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.418866 kubelet[2263]: E0317 17:56:28.418001 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.213.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.421817 kubelet[2263]: I0317 17:56:28.421735 2263 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:56:28.425686 kubelet[2263]: I0317 17:56:28.425563 2263 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:56:28.425930 kubelet[2263]: W0317 17:56:28.425880 2263 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:56:28.428818 kubelet[2263]: I0317 17:56:28.428761 2263 server.go:1264] "Started kubelet"
Mar 17 17:56:28.440671 kubelet[2263]: I0317 17:56:28.439886 2263 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:56:28.441072 kubelet[2263]: I0317 17:56:28.440941 2263 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:56:28.441670 kubelet[2263]: I0317 17:56:28.441599 2263 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:56:28.441941 kubelet[2263]: E0317 17:56:28.441776 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.213.164:6443/api/v1/namespaces/default/events\": dial tcp 64.23.213.164:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-0-80157c225a.182da8c37cd59c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-0-80157c225a,UID:ci-4230.1.0-0-80157c225a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-0-80157c225a,},FirstTimestamp:2025-03-17 17:56:28.428712968 +0000 UTC m=+0.724289206,LastTimestamp:2025-03-17 17:56:28.428712968 +0000 UTC m=+0.724289206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-0-80157c225a,}"
Mar 17 17:56:28.443411 kubelet[2263]: I0317 17:56:28.443379 2263 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:56:28.452762 kubelet[2263]: I0317 17:56:28.452381 2263 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:56:28.461177 kubelet[2263]: I0317 17:56:28.460195 2263 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:56:28.462051 kubelet[2263]: I0317 17:56:28.461391 2263 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:56:28.462051 kubelet[2263]: I0317 17:56:28.461508 2263 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:56:28.465164 kubelet[2263]: W0317 17:56:28.464676 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.213.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.465164 kubelet[2263]: E0317 17:56:28.464780 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.213.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.465164 kubelet[2263]: E0317 17:56:28.465021 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.213.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-0-80157c225a?timeout=10s\": dial tcp 64.23.213.164:6443: connect: connection refused" interval="200ms"
Mar 17 17:56:28.466077 kubelet[2263]: I0317 17:56:28.465564 2263 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:56:28.486553 kubelet[2263]: I0317 17:56:28.485792 2263 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:56:28.486553 kubelet[2263]: I0317 17:56:28.485827 2263 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:56:28.601552 kubelet[2263]: I0317 17:56:28.593056 2263 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.601552 kubelet[2263]: I0317 17:56:28.594090 2263 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:56:28.601552 kubelet[2263]: I0317 17:56:28.594105 2263 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:56:28.601552 kubelet[2263]: I0317 17:56:28.594134 2263 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:56:28.601552 kubelet[2263]: E0317 17:56:28.594470 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.213.164:6443/api/v1/nodes\": dial tcp 64.23.213.164:6443: connect: connection refused" node="ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.619917 kubelet[2263]: I0317 17:56:28.619597 2263 policy_none.go:49] "None policy: Start"
Mar 17 17:56:28.622051 kubelet[2263]: I0317 17:56:28.621357 2263 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:56:28.622051 kubelet[2263]: I0317 17:56:28.621432 2263 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:56:28.644905 kubelet[2263]: I0317 17:56:28.644065 2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:56:28.647930 kubelet[2263]: I0317 17:56:28.647831 2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:56:28.647930 kubelet[2263]: I0317 17:56:28.647896 2263 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:56:28.647930 kubelet[2263]: I0317 17:56:28.647931 2263 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:56:28.648225 kubelet[2263]: E0317 17:56:28.647998 2263 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:56:28.653676 kubelet[2263]: W0317 17:56:28.652981 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.213.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.653676 kubelet[2263]: E0317 17:56:28.653053 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.213.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:28.664647 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:56:28.667428 kubelet[2263]: E0317 17:56:28.666982 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.213.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-0-80157c225a?timeout=10s\": dial tcp 64.23.213.164:6443: connect: connection refused" interval="400ms"
Mar 17 17:56:28.693774 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:56:28.701070 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:56:28.722306 kubelet[2263]: I0317 17:56:28.721397 2263 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:56:28.722306 kubelet[2263]: I0317 17:56:28.721754 2263 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:56:28.722306 kubelet[2263]: I0317 17:56:28.721968 2263 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:56:28.726473 kubelet[2263]: E0317 17:56:28.726432 2263 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-0-80157c225a\" not found"
Mar 17 17:56:28.748858 kubelet[2263]: I0317 17:56:28.748666 2263 topology_manager.go:215] "Topology Admit Handler" podUID="5f0ba4c214da8ad6efdccb2123e0cb10" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.766129 kubelet[2263]: I0317 17:56:28.766050 2263 topology_manager.go:215] "Topology Admit Handler" podUID="cc1a392ea0cbc0c0156756dde7e8f327" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.768198 kubelet[2263]: I0317 17:56:28.768060 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f0ba4c214da8ad6efdccb2123e0cb10-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-0-80157c225a\" (UID: \"5f0ba4c214da8ad6efdccb2123e0cb10\") " pod="kube-system/kube-apiserver-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.768411 kubelet[2263]: I0317 17:56:28.768262 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f0ba4c214da8ad6efdccb2123e0cb10-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-0-80157c225a\" (UID: \"5f0ba4c214da8ad6efdccb2123e0cb10\") " pod="kube-system/kube-apiserver-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.768477 kubelet[2263]: I0317 17:56:28.768373 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f0ba4c214da8ad6efdccb2123e0cb10-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-0-80157c225a\" (UID: \"5f0ba4c214da8ad6efdccb2123e0cb10\") " pod="kube-system/kube-apiserver-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.770375 kubelet[2263]: I0317 17:56:28.770040 2263 topology_manager.go:215] "Topology Admit Handler" podUID="db5d70e2ea183973ff1052c54f4780ea" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.789555 systemd[1]: Created slice kubepods-burstable-pod5f0ba4c214da8ad6efdccb2123e0cb10.slice - libcontainer container kubepods-burstable-pod5f0ba4c214da8ad6efdccb2123e0cb10.slice.
Mar 17 17:56:28.800405 kubelet[2263]: I0317 17:56:28.799953 2263 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.800991 kubelet[2263]: E0317 17:56:28.800578 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.213.164:6443/api/v1/nodes\": dial tcp 64.23.213.164:6443: connect: connection refused" node="ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.814938 systemd[1]: Created slice kubepods-burstable-poddb5d70e2ea183973ff1052c54f4780ea.slice - libcontainer container kubepods-burstable-poddb5d70e2ea183973ff1052c54f4780ea.slice.
Mar 17 17:56:28.859244 systemd[1]: Created slice kubepods-burstable-podcc1a392ea0cbc0c0156756dde7e8f327.slice - libcontainer container kubepods-burstable-podcc1a392ea0cbc0c0156756dde7e8f327.slice.
Mar 17 17:56:28.870376 kubelet[2263]: I0317 17:56:28.869725 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.870376 kubelet[2263]: I0317 17:56:28.869804 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.870376 kubelet[2263]: I0317 17:56:28.869851 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.870376 kubelet[2263]: I0317 17:56:28.869883 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.870376 kubelet[2263]: I0317 17:56:28.869916 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:28.871181 kubelet[2263]: I0317 17:56:28.869959 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db5d70e2ea183973ff1052c54f4780ea-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-0-80157c225a\" (UID: \"db5d70e2ea183973ff1052c54f4780ea\") " pod="kube-system/kube-scheduler-ci-4230.1.0-0-80157c225a"
Mar 17 17:56:29.068838 kubelet[2263]: E0317 17:56:29.068681 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.213.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-0-80157c225a?timeout=10s\": dial tcp 64.23.213.164:6443: connect: connection refused" interval="800ms"
Mar 17 17:56:29.112959 kubelet[2263]: E0317 17:56:29.108153 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:29.113128 containerd[1497]: time="2025-03-17T17:56:29.112310921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-0-80157c225a,Uid:5f0ba4c214da8ad6efdccb2123e0cb10,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:29.124997 kubelet[2263]: E0317 17:56:29.124917 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:29.130358 containerd[1497]: time="2025-03-17T17:56:29.129995469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-0-80157c225a,Uid:db5d70e2ea183973ff1052c54f4780ea,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:29.174320 kubelet[2263]: E0317 17:56:29.170075 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:29.184471 containerd[1497]: time="2025-03-17T17:56:29.174862974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-0-80157c225a,Uid:cc1a392ea0cbc0c0156756dde7e8f327,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:29.207127 kubelet[2263]: I0317 17:56:29.206195 2263 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-0-80157c225a"
Mar 17 17:56:29.207127 kubelet[2263]: E0317 17:56:29.206893 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.213.164:6443/api/v1/nodes\": dial tcp 64.23.213.164:6443: connect: connection refused" node="ci-4230.1.0-0-80157c225a"
Mar 17 17:56:29.563029 kubelet[2263]: W0317 17:56:29.561637 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.213.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-0-80157c225a&limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:29.563029 kubelet[2263]: E0317 17:56:29.561770 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.213.164:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-0-80157c225a&limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:29.565707 kubelet[2263]: W0317 17:56:29.565590 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.213.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:29.565998 kubelet[2263]: E0317 17:56:29.565971 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.213.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused
Mar 17 17:56:29.768515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81525653.mount: Deactivated successfully.
Mar 17 17:56:29.785680 containerd[1497]: time="2025-03-17T17:56:29.784077821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:56:29.786223 containerd[1497]: time="2025-03-17T17:56:29.786175241Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:56:29.788118 containerd[1497]: time="2025-03-17T17:56:29.788026537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:56:29.792333 containerd[1497]: time="2025-03-17T17:56:29.791231206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 17 17:56:29.793924 containerd[1497]: time="2025-03-17T17:56:29.793700443Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:56:29.798253 containerd[1497]: time="2025-03-17T17:56:29.798142339Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar
17 17:56:29.798455 containerd[1497]: time="2025-03-17T17:56:29.798349046Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:56:29.802013 containerd[1497]: time="2025-03-17T17:56:29.801920223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:56:29.804675 containerd[1497]: time="2025-03-17T17:56:29.803384608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.160906ms" Mar 17 17:56:29.807372 containerd[1497]: time="2025-03-17T17:56:29.807305009Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.278919ms" Mar 17 17:56:29.818281 containerd[1497]: time="2025-03-17T17:56:29.818115133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.639534ms" Mar 17 17:56:29.871357 kubelet[2263]: E0317 17:56:29.870350 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://64.23.213.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-0-80157c225a?timeout=10s\": dial tcp 64.23.213.164:6443: connect: connection refused" interval="1.6s" Mar 17 17:56:29.969647 kubelet[2263]: W0317 17:56:29.969490 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.213.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:29.969647 kubelet[2263]: E0317 17:56:29.969574 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.213.164:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:30.012354 kubelet[2263]: W0317 17:56:30.011357 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.213.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:30.012354 kubelet[2263]: E0317 17:56:30.011485 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.213.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:30.015447 kubelet[2263]: I0317 17:56:30.013165 2263 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:30.015447 kubelet[2263]: E0317 17:56:30.013860 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.213.164:6443/api/v1/nodes\": dial tcp 64.23.213.164:6443: connect: connection refused" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:30.298059 containerd[1497]: time="2025-03-17T17:56:30.295883674Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:30.298059 containerd[1497]: time="2025-03-17T17:56:30.295973312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:30.298059 containerd[1497]: time="2025-03-17T17:56:30.295992976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:30.298059 containerd[1497]: time="2025-03-17T17:56:30.296122851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:30.363514 containerd[1497]: time="2025-03-17T17:56:30.360102789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:30.363514 containerd[1497]: time="2025-03-17T17:56:30.360205289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:30.363514 containerd[1497]: time="2025-03-17T17:56:30.360247350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:30.363514 containerd[1497]: time="2025-03-17T17:56:30.360421016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:30.388437 kubelet[2263]: E0317 17:56:30.385907 2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.213.164:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:30.388920 containerd[1497]: time="2025-03-17T17:56:30.385045883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:30.388920 containerd[1497]: time="2025-03-17T17:56:30.385146677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:30.388920 containerd[1497]: time="2025-03-17T17:56:30.385176027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:30.389486 containerd[1497]: time="2025-03-17T17:56:30.389388252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:30.416546 systemd[1]: Started cri-containerd-3295337a506a77794dc548310fa79ebd66b322874864bd5f7ac3e87409503107.scope - libcontainer container 3295337a506a77794dc548310fa79ebd66b322874864bd5f7ac3e87409503107. Mar 17 17:56:30.471142 systemd[1]: Started cri-containerd-891e9463bb8730a49ca5aec29e259ab44dddb678c26c57a6cb31e6627bb6bfd6.scope - libcontainer container 891e9463bb8730a49ca5aec29e259ab44dddb678c26c57a6cb31e6627bb6bfd6. Mar 17 17:56:30.604970 systemd[1]: Started cri-containerd-a8e96ade0441a546001fa9c8b3e8c4677db2b5e0c6d788c1f1fc8de9fca609cc.scope - libcontainer container a8e96ade0441a546001fa9c8b3e8c4677db2b5e0c6d788c1f1fc8de9fca609cc. 
Mar 17 17:56:30.719342 containerd[1497]: time="2025-03-17T17:56:30.718586794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-0-80157c225a,Uid:cc1a392ea0cbc0c0156756dde7e8f327,Namespace:kube-system,Attempt:0,} returns sandbox id \"3295337a506a77794dc548310fa79ebd66b322874864bd5f7ac3e87409503107\"" Mar 17 17:56:30.732713 kubelet[2263]: E0317 17:56:30.731309 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:30.748265 containerd[1497]: time="2025-03-17T17:56:30.747957872Z" level=info msg="CreateContainer within sandbox \"3295337a506a77794dc548310fa79ebd66b322874864bd5f7ac3e87409503107\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:56:30.784363 containerd[1497]: time="2025-03-17T17:56:30.784075630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-0-80157c225a,Uid:db5d70e2ea183973ff1052c54f4780ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"891e9463bb8730a49ca5aec29e259ab44dddb678c26c57a6cb31e6627bb6bfd6\"" Mar 17 17:56:30.788570 kubelet[2263]: E0317 17:56:30.788124 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:30.791507 containerd[1497]: time="2025-03-17T17:56:30.791457124Z" level=info msg="CreateContainer within sandbox \"891e9463bb8730a49ca5aec29e259ab44dddb678c26c57a6cb31e6627bb6bfd6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:56:30.858497 containerd[1497]: time="2025-03-17T17:56:30.858228351Z" level=info msg="CreateContainer within sandbox \"3295337a506a77794dc548310fa79ebd66b322874864bd5f7ac3e87409503107\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"ffc9b01f9b9ae8b682a9a178ee033788477940de9e40d8af8dd39737699529b2\"" Mar 17 17:56:30.867118 containerd[1497]: time="2025-03-17T17:56:30.867054129Z" level=info msg="StartContainer for \"ffc9b01f9b9ae8b682a9a178ee033788477940de9e40d8af8dd39737699529b2\"" Mar 17 17:56:30.878409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474160894.mount: Deactivated successfully. Mar 17 17:56:30.887606 containerd[1497]: time="2025-03-17T17:56:30.886903146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-0-80157c225a,Uid:5f0ba4c214da8ad6efdccb2123e0cb10,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8e96ade0441a546001fa9c8b3e8c4677db2b5e0c6d788c1f1fc8de9fca609cc\"" Mar 17 17:56:30.889079 kubelet[2263]: E0317 17:56:30.888670 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:30.896527 containerd[1497]: time="2025-03-17T17:56:30.896273838Z" level=info msg="CreateContainer within sandbox \"891e9463bb8730a49ca5aec29e259ab44dddb678c26c57a6cb31e6627bb6bfd6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5621661d299659d14388b2820cbde22f2f8c3730de90d838ef2fdcfba552862d\"" Mar 17 17:56:30.896971 containerd[1497]: time="2025-03-17T17:56:30.896302355Z" level=info msg="CreateContainer within sandbox \"a8e96ade0441a546001fa9c8b3e8c4677db2b5e0c6d788c1f1fc8de9fca609cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:56:30.897765 containerd[1497]: time="2025-03-17T17:56:30.897720968Z" level=info msg="StartContainer for \"5621661d299659d14388b2820cbde22f2f8c3730de90d838ef2fdcfba552862d\"" Mar 17 17:56:30.931680 containerd[1497]: time="2025-03-17T17:56:30.931422532Z" level=info msg="CreateContainer within sandbox \"a8e96ade0441a546001fa9c8b3e8c4677db2b5e0c6d788c1f1fc8de9fca609cc\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ff0a25f94d6dfa2f7520b91abc8518001df033a91634d1b969065cc1cd95b16\"" Mar 17 17:56:30.936338 containerd[1497]: time="2025-03-17T17:56:30.933358723Z" level=info msg="StartContainer for \"8ff0a25f94d6dfa2f7520b91abc8518001df033a91634d1b969065cc1cd95b16\"" Mar 17 17:56:30.963985 systemd[1]: Started cri-containerd-ffc9b01f9b9ae8b682a9a178ee033788477940de9e40d8af8dd39737699529b2.scope - libcontainer container ffc9b01f9b9ae8b682a9a178ee033788477940de9e40d8af8dd39737699529b2. Mar 17 17:56:31.037112 systemd[1]: Started cri-containerd-5621661d299659d14388b2820cbde22f2f8c3730de90d838ef2fdcfba552862d.scope - libcontainer container 5621661d299659d14388b2820cbde22f2f8c3730de90d838ef2fdcfba552862d. Mar 17 17:56:31.078977 systemd[1]: Started cri-containerd-8ff0a25f94d6dfa2f7520b91abc8518001df033a91634d1b969065cc1cd95b16.scope - libcontainer container 8ff0a25f94d6dfa2f7520b91abc8518001df033a91634d1b969065cc1cd95b16. Mar 17 17:56:31.190202 containerd[1497]: time="2025-03-17T17:56:31.189848924Z" level=info msg="StartContainer for \"ffc9b01f9b9ae8b682a9a178ee033788477940de9e40d8af8dd39737699529b2\" returns successfully" Mar 17 17:56:31.240786 containerd[1497]: time="2025-03-17T17:56:31.239449383Z" level=info msg="StartContainer for \"5621661d299659d14388b2820cbde22f2f8c3730de90d838ef2fdcfba552862d\" returns successfully" Mar 17 17:56:31.324794 containerd[1497]: time="2025-03-17T17:56:31.324524041Z" level=info msg="StartContainer for \"8ff0a25f94d6dfa2f7520b91abc8518001df033a91634d1b969065cc1cd95b16\" returns successfully" Mar 17 17:56:31.471829 kubelet[2263]: E0317 17:56:31.471382 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.213.164:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-0-80157c225a?timeout=10s\": dial tcp 64.23.213.164:6443: connect: connection refused" interval="3.2s" Mar 17 17:56:31.619752 kubelet[2263]: I0317 
17:56:31.619697 2263 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:31.620922 kubelet[2263]: E0317 17:56:31.620286 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.213.164:6443/api/v1/nodes\": dial tcp 64.23.213.164:6443: connect: connection refused" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:31.693610 kubelet[2263]: E0317 17:56:31.693456 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:31.695785 kubelet[2263]: E0317 17:56:31.695450 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:31.698960 kubelet[2263]: W0317 17:56:31.696954 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.213.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:31.699338 kubelet[2263]: E0317 17:56:31.699297 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.213.164:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:31.711297 kubelet[2263]: E0317 17:56:31.711243 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:31.778812 kubelet[2263]: W0317 17:56:31.778711 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://64.23.213.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:31.778812 kubelet[2263]: E0317 17:56:31.778818 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.213.164:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.213.164:6443: connect: connection refused Mar 17 17:56:32.719856 kubelet[2263]: E0317 17:56:32.719688 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:33.683263 update_engine[1474]: I20250317 17:56:33.678707 1474 update_attempter.cc:509] Updating boot flags... Mar 17 17:56:33.779189 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2551) Mar 17 17:56:34.117738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2550) Mar 17 17:56:34.838034 kubelet[2263]: I0317 17:56:34.837292 2263 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:35.358519 kubelet[2263]: E0317 17:56:35.358469 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:36.243464 kubelet[2263]: E0317 17:56:36.230965 2263 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.0-0-80157c225a\" not found" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:36.247845 kubelet[2263]: I0317 17:56:36.247365 2263 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:36.296042 kubelet[2263]: E0317 17:56:36.294608 2263 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-0-80157c225a.182da8c37cd59c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-0-80157c225a,UID:ci-4230.1.0-0-80157c225a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-0-80157c225a,},FirstTimestamp:2025-03-17 17:56:28.428712968 +0000 UTC m=+0.724289206,LastTimestamp:2025-03-17 17:56:28.428712968 +0000 UTC m=+0.724289206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-0-80157c225a,}" Mar 17 17:56:36.369550 kubelet[2263]: E0317 17:56:36.369131 2263 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.1.0-0-80157c225a.182da8c384dac9f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-0-80157c225a,UID:ci-4230.1.0-0-80157c225a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230.1.0-0-80157c225a status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-0-80157c225a,},FirstTimestamp:2025-03-17 17:56:28.563270132 +0000 UTC m=+0.858846350,LastTimestamp:2025-03-17 17:56:28.563270132 +0000 UTC m=+0.858846350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-0-80157c225a,}" Mar 17 17:56:36.416170 kubelet[2263]: I0317 17:56:36.416098 2263 apiserver.go:52] "Watching apiserver" Mar 17 17:56:36.462387 kubelet[2263]: I0317 17:56:36.462296 2263 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:56:39.196138 systemd[1]: Reload requested from client PID 2563 
('systemctl') (unit session-5.scope)... Mar 17 17:56:39.196169 systemd[1]: Reloading... Mar 17 17:56:39.419111 zram_generator::config[2611]: No configuration found. Mar 17 17:56:39.720143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:56:39.934613 systemd[1]: Reloading finished in 737 ms. Mar 17 17:56:39.979959 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:39.996809 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:56:39.997934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:39.998275 systemd[1]: kubelet.service: Consumed 1.399s CPU time, 112.5M memory peak. Mar 17 17:56:40.013238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:40.300148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:40.307273 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:56:40.400663 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:56:40.400663 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:56:40.400663 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:56:40.402579 kubelet[2658]: I0317 17:56:40.402334 2658 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:56:40.416730 kubelet[2658]: I0317 17:56:40.416129 2658 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:56:40.416730 kubelet[2658]: I0317 17:56:40.416184 2658 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:56:40.417329 kubelet[2658]: I0317 17:56:40.417299 2658 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:56:40.430673 kubelet[2658]: I0317 17:56:40.430518 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:56:40.435604 kubelet[2658]: I0317 17:56:40.435426 2658 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:56:40.466652 kubelet[2658]: I0317 17:56:40.466482 2658 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:56:40.467648 kubelet[2658]: I0317 17:56:40.467133 2658 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:56:40.467648 kubelet[2658]: I0317 17:56:40.467186 2658 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-0-80157c225a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:56:40.467648 kubelet[2658]: I0317 17:56:40.467492 2658 topology_manager.go:138] "Creating topology manager with none policy" Mar 
17 17:56:40.467648 kubelet[2658]: I0317 17:56:40.467505 2658 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:56:40.468027 kubelet[2658]: I0317 17:56:40.467566 2658 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:56:40.469689 kubelet[2658]: I0317 17:56:40.469654 2658 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:56:40.470058 kubelet[2658]: I0317 17:56:40.469858 2658 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:56:40.470058 kubelet[2658]: I0317 17:56:40.469913 2658 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:56:40.470058 kubelet[2658]: I0317 17:56:40.469950 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:56:40.477758 kubelet[2658]: I0317 17:56:40.474151 2658 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:56:40.477758 kubelet[2658]: I0317 17:56:40.474469 2658 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:56:40.478589 kubelet[2658]: I0317 17:56:40.478554 2658 server.go:1264] "Started kubelet" Mar 17 17:56:40.488432 kubelet[2658]: I0317 17:56:40.488259 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:56:40.498771 kubelet[2658]: I0317 17:56:40.498252 2658 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:56:40.500341 kubelet[2658]: I0317 17:56:40.500295 2658 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:56:40.503669 kubelet[2658]: I0317 17:56:40.502290 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:56:40.503669 kubelet[2658]: I0317 17:56:40.502541 2658 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:56:40.506098 kubelet[2658]: I0317 17:56:40.506048 2658 
volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:56:40.507556 kubelet[2658]: I0317 17:56:40.507515 2658 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:56:40.508070 kubelet[2658]: I0317 17:56:40.508044 2658 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:56:40.522683 kubelet[2658]: I0317 17:56:40.521285 2658 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:56:40.526013 kubelet[2658]: I0317 17:56:40.525948 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:56:40.552340 kubelet[2658]: I0317 17:56:40.552171 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:56:40.560872 kubelet[2658]: I0317 17:56:40.560811 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:56:40.561181 kubelet[2658]: I0317 17:56:40.561152 2658 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:56:40.561852 kubelet[2658]: I0317 17:56:40.561824 2658 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:56:40.565940 kubelet[2658]: E0317 17:56:40.564845 2658 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:56:40.587319 kubelet[2658]: I0317 17:56:40.586230 2658 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:56:40.592676 kubelet[2658]: E0317 17:56:40.590941 2658 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:56:40.625569 kubelet[2658]: I0317 17:56:40.624358 2658 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.666046 kubelet[2658]: E0317 17:56:40.665178 2658 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:56:40.678944 kubelet[2658]: I0317 17:56:40.678493 2658 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.678944 kubelet[2658]: I0317 17:56:40.678656 2658 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.743563 kubelet[2658]: I0317 17:56:40.743090 2658 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:56:40.743563 kubelet[2658]: I0317 17:56:40.743124 2658 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:56:40.743563 kubelet[2658]: I0317 17:56:40.743162 2658 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:56:40.743563 kubelet[2658]: I0317 17:56:40.743398 2658 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:56:40.743563 kubelet[2658]: I0317 17:56:40.743411 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:56:40.743563 kubelet[2658]: I0317 17:56:40.743434 2658 policy_none.go:49] "None policy: Start" Mar 17 17:56:40.746254 kubelet[2658]: I0317 17:56:40.746196 2658 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:56:40.746614 kubelet[2658]: I0317 17:56:40.746548 2658 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:56:40.747454 kubelet[2658]: I0317 17:56:40.747420 2658 state_mem.go:75] "Updated machine memory state" Mar 17 17:56:40.762415 kubelet[2658]: I0317 17:56:40.758671 2658 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Mar 17 17:56:40.762415 kubelet[2658]: I0317 17:56:40.758975 2658 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:56:40.762415 kubelet[2658]: I0317 17:56:40.759725 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:56:40.866948 kubelet[2658]: I0317 17:56:40.866227 2658 topology_manager.go:215] "Topology Admit Handler" podUID="cc1a392ea0cbc0c0156756dde7e8f327" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.866948 kubelet[2658]: I0317 17:56:40.866788 2658 topology_manager.go:215] "Topology Admit Handler" podUID="db5d70e2ea183973ff1052c54f4780ea" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.866948 kubelet[2658]: I0317 17:56:40.866909 2658 topology_manager.go:215] "Topology Admit Handler" podUID="5f0ba4c214da8ad6efdccb2123e0cb10" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.901336 kubelet[2658]: W0317 17:56:40.901272 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:56:40.902017 kubelet[2658]: W0317 17:56:40.901704 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:56:40.904735 kubelet[2658]: W0317 17:56:40.904255 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:56:40.914680 kubelet[2658]: I0317 17:56:40.913073 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" 
(UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.914680 kubelet[2658]: I0317 17:56:40.913152 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.914680 kubelet[2658]: I0317 17:56:40.913190 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db5d70e2ea183973ff1052c54f4780ea-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-0-80157c225a\" (UID: \"db5d70e2ea183973ff1052c54f4780ea\") " pod="kube-system/kube-scheduler-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.914680 kubelet[2658]: I0317 17:56:40.913218 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f0ba4c214da8ad6efdccb2123e0cb10-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-0-80157c225a\" (UID: \"5f0ba4c214da8ad6efdccb2123e0cb10\") " pod="kube-system/kube-apiserver-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.914680 kubelet[2658]: I0317 17:56:40.913247 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f0ba4c214da8ad6efdccb2123e0cb10-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-0-80157c225a\" (UID: \"5f0ba4c214da8ad6efdccb2123e0cb10\") " pod="kube-system/kube-apiserver-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.915061 kubelet[2658]: I0317 17:56:40.913275 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/5f0ba4c214da8ad6efdccb2123e0cb10-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-0-80157c225a\" (UID: \"5f0ba4c214da8ad6efdccb2123e0cb10\") " pod="kube-system/kube-apiserver-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.915061 kubelet[2658]: I0317 17:56:40.913302 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.915061 kubelet[2658]: I0317 17:56:40.913329 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a" Mar 17 17:56:40.915061 kubelet[2658]: I0317 17:56:40.913358 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc1a392ea0cbc0c0156756dde7e8f327-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-0-80157c225a\" (UID: \"cc1a392ea0cbc0c0156756dde7e8f327\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a" Mar 17 17:56:41.204284 kubelet[2658]: E0317 17:56:41.204103 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:41.207030 kubelet[2658]: E0317 17:56:41.206963 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:41.207808 kubelet[2658]: E0317 17:56:41.207759 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:41.477808 kubelet[2658]: I0317 17:56:41.473766 2658 apiserver.go:52] "Watching apiserver" Mar 17 17:56:41.509180 kubelet[2658]: I0317 17:56:41.508247 2658 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:56:41.675437 kubelet[2658]: E0317 17:56:41.675391 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:41.681722 kubelet[2658]: E0317 17:56:41.681679 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:41.684447 kubelet[2658]: E0317 17:56:41.684409 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:41.916701 kubelet[2658]: I0317 17:56:41.913357 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-0-80157c225a" podStartSLOduration=1.913330169 podStartE2EDuration="1.913330169s" podCreationTimestamp="2025-03-17 17:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:41.872008246 +0000 UTC m=+1.557202723" watchObservedRunningTime="2025-03-17 17:56:41.913330169 +0000 UTC m=+1.598524635" Mar 17 17:56:41.940752 kubelet[2658]: I0317 17:56:41.940606 2658 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-0-80157c225a" podStartSLOduration=1.940576281 podStartE2EDuration="1.940576281s" podCreationTimestamp="2025-03-17 17:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:41.913757089 +0000 UTC m=+1.598951560" watchObservedRunningTime="2025-03-17 17:56:41.940576281 +0000 UTC m=+1.625770750" Mar 17 17:56:42.386259 sudo[1640]: pam_unix(sudo:session): session closed for user root Mar 17 17:56:42.392191 sshd[1639]: Connection closed by 139.178.68.195 port 44420 Mar 17 17:56:42.393596 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:42.399212 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:56:42.400374 systemd[1]: sshd@4-64.23.213.164:22-139.178.68.195:44420.service: Deactivated successfully. Mar 17 17:56:42.410547 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:56:42.411054 systemd[1]: session-5.scope: Consumed 5.749s CPU time, 193.4M memory peak. Mar 17 17:56:42.417144 systemd-logind[1472]: Removed session 5. 
Mar 17 17:56:42.680145 kubelet[2658]: E0317 17:56:42.679253 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:43.184742 kubelet[2658]: I0317 17:56:43.174931 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-0-80157c225a" podStartSLOduration=3.174902662 podStartE2EDuration="3.174902662s" podCreationTimestamp="2025-03-17 17:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:41.942665371 +0000 UTC m=+1.627859837" watchObservedRunningTime="2025-03-17 17:56:43.174902662 +0000 UTC m=+2.860097126" Mar 17 17:56:43.591953 kubelet[2658]: E0317 17:56:43.591836 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:43.681241 kubelet[2658]: E0317 17:56:43.681189 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:43.682764 kubelet[2658]: E0317 17:56:43.682520 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:44.683748 kubelet[2658]: E0317 17:56:44.682507 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:45.685311 kubelet[2658]: E0317 17:56:45.685076 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:50.073897 kubelet[2658]: E0317 17:56:50.073850 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:50.700695 kubelet[2658]: E0317 17:56:50.700443 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:54.152375 kubelet[2658]: I0317 17:56:54.149445 2658 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:56:54.154010 containerd[1497]: time="2025-03-17T17:56:54.153939543Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:56:54.156265 kubelet[2658]: I0317 17:56:54.154930 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:56:54.984786 kubelet[2658]: I0317 17:56:54.968806 2658 topology_manager.go:215] "Topology Admit Handler" podUID="a3650c62-f4d9-4770-9a98-6eb9caf0a211" podNamespace="kube-flannel" podName="kube-flannel-ds-sfldn" Mar 17 17:56:54.986669 kubelet[2658]: I0317 17:56:54.985813 2658 topology_manager.go:215] "Topology Admit Handler" podUID="c5726e50-a47a-4f1d-b296-e82d1436e437" podNamespace="kube-system" podName="kube-proxy-cdhl8" Mar 17 17:56:55.017223 kubelet[2658]: W0317 17:56:55.016778 2658 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.017223 kubelet[2658]: E0317 17:56:55.016831 2658 
reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.017223 kubelet[2658]: W0317 17:56:55.016913 2658 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.017223 kubelet[2658]: E0317 17:56:55.016934 2658 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.020743 systemd[1]: Created slice kubepods-burstable-poda3650c62_f4d9_4770_9a98_6eb9caf0a211.slice - libcontainer container kubepods-burstable-poda3650c62_f4d9_4770_9a98_6eb9caf0a211.slice. 
Mar 17 17:56:55.026119 kubelet[2658]: W0317 17:56:55.024180 2658 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.026119 kubelet[2658]: E0317 17:56:55.024232 2658 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.026119 kubelet[2658]: W0317 17:56:55.024487 2658 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.026119 kubelet[2658]: E0317 17:56:55.024511 2658 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4230.1.0-0-80157c225a" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.1.0-0-80157c225a' and this object Mar 17 17:56:55.052641 systemd[1]: Created slice kubepods-besteffort-podc5726e50_a47a_4f1d_b296_e82d1436e437.slice - libcontainer container kubepods-besteffort-podc5726e50_a47a_4f1d_b296_e82d1436e437.slice. 
Mar 17 17:56:55.093107 kubelet[2658]: I0317 17:56:55.093012 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a3650c62-f4d9-4770-9a98-6eb9caf0a211-run\") pod \"kube-flannel-ds-sfldn\" (UID: \"a3650c62-f4d9-4770-9a98-6eb9caf0a211\") " pod="kube-flannel/kube-flannel-ds-sfldn" Mar 17 17:56:55.093107 kubelet[2658]: I0317 17:56:55.093087 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a3650c62-f4d9-4770-9a98-6eb9caf0a211-flannel-cfg\") pod \"kube-flannel-ds-sfldn\" (UID: \"a3650c62-f4d9-4770-9a98-6eb9caf0a211\") " pod="kube-flannel/kube-flannel-ds-sfldn" Mar 17 17:56:55.093107 kubelet[2658]: I0317 17:56:55.093121 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a3650c62-f4d9-4770-9a98-6eb9caf0a211-cni\") pod \"kube-flannel-ds-sfldn\" (UID: \"a3650c62-f4d9-4770-9a98-6eb9caf0a211\") " pod="kube-flannel/kube-flannel-ds-sfldn" Mar 17 17:56:55.093431 kubelet[2658]: I0317 17:56:55.093153 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5726e50-a47a-4f1d-b296-e82d1436e437-kube-proxy\") pod \"kube-proxy-cdhl8\" (UID: \"c5726e50-a47a-4f1d-b296-e82d1436e437\") " pod="kube-system/kube-proxy-cdhl8" Mar 17 17:56:55.093431 kubelet[2658]: I0317 17:56:55.093181 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5726e50-a47a-4f1d-b296-e82d1436e437-xtables-lock\") pod \"kube-proxy-cdhl8\" (UID: \"c5726e50-a47a-4f1d-b296-e82d1436e437\") " pod="kube-system/kube-proxy-cdhl8" Mar 17 17:56:55.093431 kubelet[2658]: I0317 17:56:55.093210 2658 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt52h\" (UniqueName: \"kubernetes.io/projected/c5726e50-a47a-4f1d-b296-e82d1436e437-kube-api-access-kt52h\") pod \"kube-proxy-cdhl8\" (UID: \"c5726e50-a47a-4f1d-b296-e82d1436e437\") " pod="kube-system/kube-proxy-cdhl8" Mar 17 17:56:55.093431 kubelet[2658]: I0317 17:56:55.093238 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3650c62-f4d9-4770-9a98-6eb9caf0a211-xtables-lock\") pod \"kube-flannel-ds-sfldn\" (UID: \"a3650c62-f4d9-4770-9a98-6eb9caf0a211\") " pod="kube-flannel/kube-flannel-ds-sfldn" Mar 17 17:56:55.093431 kubelet[2658]: I0317 17:56:55.093284 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sgls\" (UniqueName: \"kubernetes.io/projected/a3650c62-f4d9-4770-9a98-6eb9caf0a211-kube-api-access-8sgls\") pod \"kube-flannel-ds-sfldn\" (UID: \"a3650c62-f4d9-4770-9a98-6eb9caf0a211\") " pod="kube-flannel/kube-flannel-ds-sfldn" Mar 17 17:56:55.093681 kubelet[2658]: I0317 17:56:55.093312 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5726e50-a47a-4f1d-b296-e82d1436e437-lib-modules\") pod \"kube-proxy-cdhl8\" (UID: \"c5726e50-a47a-4f1d-b296-e82d1436e437\") " pod="kube-system/kube-proxy-cdhl8" Mar 17 17:56:55.093681 kubelet[2658]: I0317 17:56:55.093345 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a3650c62-f4d9-4770-9a98-6eb9caf0a211-cni-plugin\") pod \"kube-flannel-ds-sfldn\" (UID: \"a3650c62-f4d9-4770-9a98-6eb9caf0a211\") " pod="kube-flannel/kube-flannel-ds-sfldn" Mar 17 17:56:56.194702 kubelet[2658]: E0317 17:56:56.194006 2658 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: 
failed to sync configmap cache: timed out waiting for the condition Mar 17 17:56:56.194702 kubelet[2658]: E0317 17:56:56.194152 2658 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5726e50-a47a-4f1d-b296-e82d1436e437-kube-proxy podName:c5726e50-a47a-4f1d-b296-e82d1436e437 nodeName:}" failed. No retries permitted until 2025-03-17 17:56:56.694121007 +0000 UTC m=+16.379315476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c5726e50-a47a-4f1d-b296-e82d1436e437-kube-proxy") pod "kube-proxy-cdhl8" (UID: "c5726e50-a47a-4f1d-b296-e82d1436e437") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:56:56.194702 kubelet[2658]: E0317 17:56:56.194457 2658 configmap.go:199] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:56:56.194702 kubelet[2658]: E0317 17:56:56.194514 2658 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3650c62-f4d9-4770-9a98-6eb9caf0a211-flannel-cfg podName:a3650c62-f4d9-4770-9a98-6eb9caf0a211 nodeName:}" failed. No retries permitted until 2025-03-17 17:56:56.694495392 +0000 UTC m=+16.379689844 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/a3650c62-f4d9-4770-9a98-6eb9caf0a211-flannel-cfg") pod "kube-flannel-ds-sfldn" (UID: "a3650c62-f4d9-4770-9a98-6eb9caf0a211") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:56:56.255215 kubelet[2658]: E0317 17:56:56.252866 2658 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:56:56.255215 kubelet[2658]: E0317 17:56:56.252951 2658 projected.go:200] Error preparing data for projected volume kube-api-access-8sgls for pod kube-flannel/kube-flannel-ds-sfldn: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:56:56.255215 kubelet[2658]: E0317 17:56:56.253074 2658 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3650c62-f4d9-4770-9a98-6eb9caf0a211-kube-api-access-8sgls podName:a3650c62-f4d9-4770-9a98-6eb9caf0a211 nodeName:}" failed. No retries permitted until 2025-03-17 17:56:56.753044373 +0000 UTC m=+16.438238833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8sgls" (UniqueName: "kubernetes.io/projected/a3650c62-f4d9-4770-9a98-6eb9caf0a211-kube-api-access-8sgls") pod "kube-flannel-ds-sfldn" (UID: "a3650c62-f4d9-4770-9a98-6eb9caf0a211") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:56:56.845398 kubelet[2658]: E0317 17:56:56.845312 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:56.846479 containerd[1497]: time="2025-03-17T17:56:56.846376195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sfldn,Uid:a3650c62-f4d9-4770-9a98-6eb9caf0a211,Namespace:kube-flannel,Attempt:0,}" Mar 17 17:56:56.871546 kubelet[2658]: E0317 17:56:56.870898 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:56.873949 containerd[1497]: time="2025-03-17T17:56:56.872285604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cdhl8,Uid:c5726e50-a47a-4f1d-b296-e82d1436e437,Namespace:kube-system,Attempt:0,}" Mar 17 17:56:56.950899 containerd[1497]: time="2025-03-17T17:56:56.949335928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:56.950899 containerd[1497]: time="2025-03-17T17:56:56.949436478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:56.950899 containerd[1497]: time="2025-03-17T17:56:56.949462132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:56.953700 containerd[1497]: time="2025-03-17T17:56:56.951087622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:56.982862 containerd[1497]: time="2025-03-17T17:56:56.982363602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:56.982862 containerd[1497]: time="2025-03-17T17:56:56.982436889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:56.982862 containerd[1497]: time="2025-03-17T17:56:56.982455011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:56.982862 containerd[1497]: time="2025-03-17T17:56:56.982560554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:57.026230 systemd[1]: Started cri-containerd-53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080.scope - libcontainer container 53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080. Mar 17 17:56:57.067988 systemd[1]: Started cri-containerd-109576b22b57423b9f639bfbdea502996307e757623f7dca4ccc5879bc14df17.scope - libcontainer container 109576b22b57423b9f639bfbdea502996307e757623f7dca4ccc5879bc14df17. 
Mar 17 17:56:57.150580 containerd[1497]: time="2025-03-17T17:56:57.150399616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cdhl8,Uid:c5726e50-a47a-4f1d-b296-e82d1436e437,Namespace:kube-system,Attempt:0,} returns sandbox id \"109576b22b57423b9f639bfbdea502996307e757623f7dca4ccc5879bc14df17\"" Mar 17 17:56:57.154617 kubelet[2658]: E0317 17:56:57.153867 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:57.165419 containerd[1497]: time="2025-03-17T17:56:57.165178051Z" level=info msg="CreateContainer within sandbox \"109576b22b57423b9f639bfbdea502996307e757623f7dca4ccc5879bc14df17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:56:57.179187 containerd[1497]: time="2025-03-17T17:56:57.179098319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sfldn,Uid:a3650c62-f4d9-4770-9a98-6eb9caf0a211,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080\"" Mar 17 17:56:57.181309 kubelet[2658]: E0317 17:56:57.181113 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:57.192694 containerd[1497]: time="2025-03-17T17:56:57.191020143Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Mar 17 17:56:57.241198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423151670.mount: Deactivated successfully. 
Mar 17 17:56:57.255526 containerd[1497]: time="2025-03-17T17:56:57.254793515Z" level=info msg="CreateContainer within sandbox \"109576b22b57423b9f639bfbdea502996307e757623f7dca4ccc5879bc14df17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"103871dec1791e58b787a4079db0665e955efac6e317cb79cd2276ed43291466\"" Mar 17 17:56:57.259749 containerd[1497]: time="2025-03-17T17:56:57.259657660Z" level=info msg="StartContainer for \"103871dec1791e58b787a4079db0665e955efac6e317cb79cd2276ed43291466\"" Mar 17 17:56:57.368007 systemd[1]: Started cri-containerd-103871dec1791e58b787a4079db0665e955efac6e317cb79cd2276ed43291466.scope - libcontainer container 103871dec1791e58b787a4079db0665e955efac6e317cb79cd2276ed43291466. Mar 17 17:56:57.443593 containerd[1497]: time="2025-03-17T17:56:57.443383306Z" level=info msg="StartContainer for \"103871dec1791e58b787a4079db0665e955efac6e317cb79cd2276ed43291466\" returns successfully" Mar 17 17:56:57.736032 kubelet[2658]: E0317 17:56:57.733315 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:58.190030 systemd[1]: run-containerd-runc-k8s.io-103871dec1791e58b787a4079db0665e955efac6e317cb79cd2276ed43291466-runc.iNRm0n.mount: Deactivated successfully. Mar 17 17:56:59.422901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385907074.mount: Deactivated successfully. 
Mar 17 17:56:59.496080 containerd[1497]: time="2025-03-17T17:56:59.495986438Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:59.497388 containerd[1497]: time="2025-03-17T17:56:59.496812872Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Mar 17 17:56:59.498374 containerd[1497]: time="2025-03-17T17:56:59.498323265Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:59.505683 containerd[1497]: time="2025-03-17T17:56:59.504952283Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:56:59.506225 containerd[1497]: time="2025-03-17T17:56:59.506174891Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.315094238s" Mar 17 17:56:59.506364 containerd[1497]: time="2025-03-17T17:56:59.506343524Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Mar 17 17:56:59.510797 containerd[1497]: time="2025-03-17T17:56:59.510425625Z" level=info msg="CreateContainer within sandbox \"53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 17 17:56:59.556347 containerd[1497]: 
time="2025-03-17T17:56:59.556257185Z" level=info msg="CreateContainer within sandbox \"53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905\"" Mar 17 17:56:59.559468 containerd[1497]: time="2025-03-17T17:56:59.559152947Z" level=info msg="StartContainer for \"691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905\"" Mar 17 17:56:59.599963 systemd[1]: Started cri-containerd-691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905.scope - libcontainer container 691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905. Mar 17 17:56:59.641058 systemd[1]: cri-containerd-691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905.scope: Deactivated successfully. Mar 17 17:56:59.645562 containerd[1497]: time="2025-03-17T17:56:59.643877463Z" level=info msg="StartContainer for \"691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905\" returns successfully" Mar 17 17:56:59.692981 containerd[1497]: time="2025-03-17T17:56:59.692604305Z" level=info msg="shim disconnected" id=691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905 namespace=k8s.io Mar 17 17:56:59.692981 containerd[1497]: time="2025-03-17T17:56:59.692847435Z" level=warning msg="cleaning up after shim disconnected" id=691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905 namespace=k8s.io Mar 17 17:56:59.692981 containerd[1497]: time="2025-03-17T17:56:59.692862970Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:59.750993 kubelet[2658]: E0317 17:56:59.750903 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:56:59.756881 containerd[1497]: time="2025-03-17T17:56:59.756776864Z" level=info msg="PullImage 
\"docker.io/flannel/flannel:v0.22.0\"" Mar 17 17:56:59.776055 kubelet[2658]: I0317 17:56:59.775933 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cdhl8" podStartSLOduration=5.77590009 podStartE2EDuration="5.77590009s" podCreationTimestamp="2025-03-17 17:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:57.763658918 +0000 UTC m=+17.448853388" watchObservedRunningTime="2025-03-17 17:56:59.77590009 +0000 UTC m=+19.461094557" Mar 17 17:57:00.280347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-691afc94f272229f029df8cb3d33f0bf809dc2dd5a3bd3e1b48509a5c1622905-rootfs.mount: Deactivated successfully. Mar 17 17:57:02.064147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734644400.mount: Deactivated successfully. Mar 17 17:57:03.310709 containerd[1497]: time="2025-03-17T17:57:03.310119233Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:03.312255 containerd[1497]: time="2025-03-17T17:57:03.312107604Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Mar 17 17:57:03.313890 containerd[1497]: time="2025-03-17T17:57:03.313798945Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:03.320904 containerd[1497]: time="2025-03-17T17:57:03.320591175Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:57:03.322913 containerd[1497]: time="2025-03-17T17:57:03.322697699Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with 
image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.565860597s" Mar 17 17:57:03.322913 containerd[1497]: time="2025-03-17T17:57:03.322753306Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Mar 17 17:57:03.332044 containerd[1497]: time="2025-03-17T17:57:03.331938128Z" level=info msg="CreateContainer within sandbox \"53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:57:03.359180 containerd[1497]: time="2025-03-17T17:57:03.359109031Z" level=info msg="CreateContainer within sandbox \"53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b\"" Mar 17 17:57:03.362059 containerd[1497]: time="2025-03-17T17:57:03.361998045Z" level=info msg="StartContainer for \"87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b\"" Mar 17 17:57:03.414978 systemd[1]: Started cri-containerd-87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b.scope - libcontainer container 87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b. Mar 17 17:57:03.455446 systemd[1]: cri-containerd-87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b.scope: Deactivated successfully. 
Mar 17 17:57:03.460330 containerd[1497]: time="2025-03-17T17:57:03.459979281Z" level=info msg="StartContainer for \"87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b\" returns successfully" Mar 17 17:57:03.480009 kubelet[2658]: I0317 17:57:03.476468 2658 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:57:03.509397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b-rootfs.mount: Deactivated successfully. Mar 17 17:57:03.549366 kubelet[2658]: I0317 17:57:03.549140 2658 topology_manager.go:215] "Topology Admit Handler" podUID="bdf89199-f5f3-4c7d-81c4-a7e89df4416e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tcjtv" Mar 17 17:57:03.563891 kubelet[2658]: I0317 17:57:03.563698 2658 topology_manager.go:215] "Topology Admit Handler" podUID="1111987d-56da-4d33-9478-23c676eb8949" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zhpfq" Mar 17 17:57:03.599979 systemd[1]: Created slice kubepods-burstable-podbdf89199_f5f3_4c7d_81c4_a7e89df4416e.slice - libcontainer container kubepods-burstable-podbdf89199_f5f3_4c7d_81c4_a7e89df4416e.slice. Mar 17 17:57:03.605269 containerd[1497]: time="2025-03-17T17:57:03.604316075Z" level=info msg="shim disconnected" id=87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b namespace=k8s.io Mar 17 17:57:03.605269 containerd[1497]: time="2025-03-17T17:57:03.604410322Z" level=warning msg="cleaning up after shim disconnected" id=87f6b067445bc10d0ce7cf7071cbd0107936b2daa40209b2fbd375d66995239b namespace=k8s.io Mar 17 17:57:03.605269 containerd[1497]: time="2025-03-17T17:57:03.604427889Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:03.630252 systemd[1]: Created slice kubepods-burstable-pod1111987d_56da_4d33_9478_23c676eb8949.slice - libcontainer container kubepods-burstable-pod1111987d_56da_4d33_9478_23c676eb8949.slice. 
Mar 17 17:57:03.695363 kubelet[2658]: I0317 17:57:03.695293 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdf89199-f5f3-4c7d-81c4-a7e89df4416e-config-volume\") pod \"coredns-7db6d8ff4d-tcjtv\" (UID: \"bdf89199-f5f3-4c7d-81c4-a7e89df4416e\") " pod="kube-system/coredns-7db6d8ff4d-tcjtv" Mar 17 17:57:03.695363 kubelet[2658]: I0317 17:57:03.695373 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4jd\" (UniqueName: \"kubernetes.io/projected/bdf89199-f5f3-4c7d-81c4-a7e89df4416e-kube-api-access-5l4jd\") pod \"coredns-7db6d8ff4d-tcjtv\" (UID: \"bdf89199-f5f3-4c7d-81c4-a7e89df4416e\") " pod="kube-system/coredns-7db6d8ff4d-tcjtv" Mar 17 17:57:03.695604 kubelet[2658]: I0317 17:57:03.695408 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1111987d-56da-4d33-9478-23c676eb8949-config-volume\") pod \"coredns-7db6d8ff4d-zhpfq\" (UID: \"1111987d-56da-4d33-9478-23c676eb8949\") " pod="kube-system/coredns-7db6d8ff4d-zhpfq" Mar 17 17:57:03.695604 kubelet[2658]: I0317 17:57:03.695437 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkd4\" (UniqueName: \"kubernetes.io/projected/1111987d-56da-4d33-9478-23c676eb8949-kube-api-access-pdkd4\") pod \"coredns-7db6d8ff4d-zhpfq\" (UID: \"1111987d-56da-4d33-9478-23c676eb8949\") " pod="kube-system/coredns-7db6d8ff4d-zhpfq" Mar 17 17:57:03.771224 kubelet[2658]: E0317 17:57:03.770937 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:03.780160 containerd[1497]: time="2025-03-17T17:57:03.779592040Z" level=info msg="CreateContainer within sandbox 
\"53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 17 17:57:03.810218 containerd[1497]: time="2025-03-17T17:57:03.809011257Z" level=info msg="CreateContainer within sandbox \"53373e5e40f14a770efd102d85971e1bba31b3dee5e83a977f3970153fff5080\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c7df0cc5106c45e8021d4b2b18e606ab04d2b1de08780da486de8b463def0f27\"" Mar 17 17:57:03.811939 containerd[1497]: time="2025-03-17T17:57:03.811034354Z" level=info msg="StartContainer for \"c7df0cc5106c45e8021d4b2b18e606ab04d2b1de08780da486de8b463def0f27\"" Mar 17 17:57:03.864054 systemd[1]: Started cri-containerd-c7df0cc5106c45e8021d4b2b18e606ab04d2b1de08780da486de8b463def0f27.scope - libcontainer container c7df0cc5106c45e8021d4b2b18e606ab04d2b1de08780da486de8b463def0f27. Mar 17 17:57:03.921644 kubelet[2658]: E0317 17:57:03.921553 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:03.923390 containerd[1497]: time="2025-03-17T17:57:03.922776826Z" level=info msg="StartContainer for \"c7df0cc5106c45e8021d4b2b18e606ab04d2b1de08780da486de8b463def0f27\" returns successfully" Mar 17 17:57:03.926340 containerd[1497]: time="2025-03-17T17:57:03.924930033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tcjtv,Uid:bdf89199-f5f3-4c7d-81c4-a7e89df4416e,Namespace:kube-system,Attempt:0,}" Mar 17 17:57:03.939499 kubelet[2658]: E0317 17:57:03.939428 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:03.940355 containerd[1497]: time="2025-03-17T17:57:03.940213271Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhpfq,Uid:1111987d-56da-4d33-9478-23c676eb8949,Namespace:kube-system,Attempt:0,}" Mar 17 17:57:04.009472 containerd[1497]: time="2025-03-17T17:57:04.009402416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tcjtv,Uid:bdf89199-f5f3-4c7d-81c4-a7e89df4416e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8db9219c219d37a37d28e58d23fcd096dedc849e4236b4913a13507f429adb58\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:57:04.010212 kubelet[2658]: E0317 17:57:04.010091 2658 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db9219c219d37a37d28e58d23fcd096dedc849e4236b4913a13507f429adb58\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:57:04.010900 kubelet[2658]: E0317 17:57:04.010334 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db9219c219d37a37d28e58d23fcd096dedc849e4236b4913a13507f429adb58\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tcjtv" Mar 17 17:57:04.010900 kubelet[2658]: E0317 17:57:04.010373 2658 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db9219c219d37a37d28e58d23fcd096dedc849e4236b4913a13507f429adb58\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tcjtv" Mar 17 17:57:04.010900 kubelet[2658]: E0317 17:57:04.010446 2658 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tcjtv_kube-system(bdf89199-f5f3-4c7d-81c4-a7e89df4416e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tcjtv_kube-system(bdf89199-f5f3-4c7d-81c4-a7e89df4416e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8db9219c219d37a37d28e58d23fcd096dedc849e4236b4913a13507f429adb58\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-tcjtv" podUID="bdf89199-f5f3-4c7d-81c4-a7e89df4416e" Mar 17 17:57:04.021801 containerd[1497]: time="2025-03-17T17:57:04.021524572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhpfq,Uid:1111987d-56da-4d33-9478-23c676eb8949,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffa14447d9412741c155322eb57c7a9dc9f7bd9ee69c322d9a230f719dce4a7d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:57:04.022429 kubelet[2658]: E0317 17:57:04.022299 2658 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa14447d9412741c155322eb57c7a9dc9f7bd9ee69c322d9a230f719dce4a7d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:57:04.022429 kubelet[2658]: E0317 17:57:04.022384 2658 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa14447d9412741c155322eb57c7a9dc9f7bd9ee69c322d9a230f719dce4a7d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-zhpfq" Mar 17 17:57:04.022429 
kubelet[2658]: E0317 17:57:04.022418 2658 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa14447d9412741c155322eb57c7a9dc9f7bd9ee69c322d9a230f719dce4a7d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-zhpfq" Mar 17 17:57:04.023026 kubelet[2658]: E0317 17:57:04.022492 2658 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zhpfq_kube-system(1111987d-56da-4d33-9478-23c676eb8949)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zhpfq_kube-system(1111987d-56da-4d33-9478-23c676eb8949)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffa14447d9412741c155322eb57c7a9dc9f7bd9ee69c322d9a230f719dce4a7d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-zhpfq" podUID="1111987d-56da-4d33-9478-23c676eb8949" Mar 17 17:57:04.778666 kubelet[2658]: E0317 17:57:04.778110 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:04.797295 kubelet[2658]: I0317 17:57:04.796681 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-sfldn" podStartSLOduration=4.660541847 podStartE2EDuration="10.796639473s" podCreationTimestamp="2025-03-17 17:56:54 +0000 UTC" firstStartedPulling="2025-03-17 17:56:57.188984998 +0000 UTC m=+16.874179444" lastFinishedPulling="2025-03-17 17:57:03.325082601 +0000 UTC m=+23.010277070" observedRunningTime="2025-03-17 17:57:04.796116022 +0000 UTC m=+24.481310491" watchObservedRunningTime="2025-03-17 17:57:04.796639473 +0000 
UTC m=+24.481833936" Mar 17 17:57:05.053036 systemd-networkd[1385]: flannel.1: Link UP Mar 17 17:57:05.053053 systemd-networkd[1385]: flannel.1: Gained carrier Mar 17 17:57:05.781947 kubelet[2658]: E0317 17:57:05.780825 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:07.104566 systemd-networkd[1385]: flannel.1: Gained IPv6LL Mar 17 17:57:15.565760 kubelet[2658]: E0317 17:57:15.565569 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:15.567493 containerd[1497]: time="2025-03-17T17:57:15.567424751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhpfq,Uid:1111987d-56da-4d33-9478-23c676eb8949,Namespace:kube-system,Attempt:0,}" Mar 17 17:57:15.637283 systemd-networkd[1385]: cni0: Link UP Mar 17 17:57:15.637295 systemd-networkd[1385]: cni0: Gained carrier Mar 17 17:57:15.648311 systemd-networkd[1385]: cni0: Lost carrier Mar 17 17:57:15.659985 systemd-networkd[1385]: veth7dc6bc36: Link UP Mar 17 17:57:15.663197 kernel: cni0: port 1(veth7dc6bc36) entered blocking state Mar 17 17:57:15.663460 kernel: cni0: port 1(veth7dc6bc36) entered disabled state Mar 17 17:57:15.674237 kernel: veth7dc6bc36: entered allmulticast mode Mar 17 17:57:15.674752 kernel: veth7dc6bc36: entered promiscuous mode Mar 17 17:57:15.674796 kernel: cni0: port 1(veth7dc6bc36) entered blocking state Mar 17 17:57:15.674827 kernel: cni0: port 1(veth7dc6bc36) entered forwarding state Mar 17 17:57:15.674869 kernel: cni0: port 1(veth7dc6bc36) entered disabled state Mar 17 17:57:15.703933 kernel: cni0: port 1(veth7dc6bc36) entered blocking state Mar 17 17:57:15.705924 kernel: cni0: port 1(veth7dc6bc36) entered forwarding state Mar 17 17:57:15.703879 systemd-networkd[1385]: 
veth7dc6bc36: Gained carrier Mar 17 17:57:15.707181 systemd-networkd[1385]: cni0: Gained carrier Mar 17 17:57:15.720906 containerd[1497]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Mar 17 17:57:15.720906 containerd[1497]: delegateAdd: netconf sent to delegate plugin: Mar 17 17:57:15.772973 containerd[1497]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-17T17:57:15.772053098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:57:15.772973 containerd[1497]: time="2025-03-17T17:57:15.772151049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:57:15.772973 containerd[1497]: time="2025-03-17T17:57:15.772171015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:15.772973 containerd[1497]: time="2025-03-17T17:57:15.772309691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:15.809423 systemd[1]: run-containerd-runc-k8s.io-502637fd1b7421d610632debab27cd2287af21fcbbe94f3eaed337ad47e6672d-runc.OKXa7E.mount: Deactivated successfully. 
Mar 17 17:57:15.831355 systemd[1]: Started cri-containerd-502637fd1b7421d610632debab27cd2287af21fcbbe94f3eaed337ad47e6672d.scope - libcontainer container 502637fd1b7421d610632debab27cd2287af21fcbbe94f3eaed337ad47e6672d. Mar 17 17:57:15.902867 containerd[1497]: time="2025-03-17T17:57:15.902777022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhpfq,Uid:1111987d-56da-4d33-9478-23c676eb8949,Namespace:kube-system,Attempt:0,} returns sandbox id \"502637fd1b7421d610632debab27cd2287af21fcbbe94f3eaed337ad47e6672d\"" Mar 17 17:57:15.906704 kubelet[2658]: E0317 17:57:15.906226 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:15.912478 containerd[1497]: time="2025-03-17T17:57:15.912403936Z" level=info msg="CreateContainer within sandbox \"502637fd1b7421d610632debab27cd2287af21fcbbe94f3eaed337ad47e6672d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:57:15.940451 containerd[1497]: time="2025-03-17T17:57:15.940001768Z" level=info msg="CreateContainer within sandbox \"502637fd1b7421d610632debab27cd2287af21fcbbe94f3eaed337ad47e6672d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"001cec257fd61ec8a587dd2b1b5020d748972e3048ee8fdcc383433e0d8de103\"" Mar 17 17:57:15.941867 containerd[1497]: time="2025-03-17T17:57:15.941818009Z" level=info msg="StartContainer for \"001cec257fd61ec8a587dd2b1b5020d748972e3048ee8fdcc383433e0d8de103\"" Mar 17 17:57:15.985045 systemd[1]: Started cri-containerd-001cec257fd61ec8a587dd2b1b5020d748972e3048ee8fdcc383433e0d8de103.scope - libcontainer container 001cec257fd61ec8a587dd2b1b5020d748972e3048ee8fdcc383433e0d8de103. 
Mar 17 17:57:16.036684 containerd[1497]: time="2025-03-17T17:57:16.035073586Z" level=info msg="StartContainer for \"001cec257fd61ec8a587dd2b1b5020d748972e3048ee8fdcc383433e0d8de103\" returns successfully" Mar 17 17:57:16.592686 kubelet[2658]: E0317 17:57:16.583310 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:16.593326 containerd[1497]: time="2025-03-17T17:57:16.583990151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tcjtv,Uid:bdf89199-f5f3-4c7d-81c4-a7e89df4416e,Namespace:kube-system,Attempt:0,}" Mar 17 17:57:16.699945 kernel: cni0: port 2(veth4f7dac7e) entered blocking state Mar 17 17:57:16.700973 kernel: cni0: port 2(veth4f7dac7e) entered disabled state Mar 17 17:57:16.695855 systemd-networkd[1385]: veth4f7dac7e: Link UP Mar 17 17:57:16.713974 kernel: veth4f7dac7e: entered allmulticast mode Mar 17 17:57:16.721167 kernel: veth4f7dac7e: entered promiscuous mode Mar 17 17:57:16.747533 kernel: cni0: port 2(veth4f7dac7e) entered blocking state Mar 17 17:57:16.747784 kernel: cni0: port 2(veth4f7dac7e) entered forwarding state Mar 17 17:57:16.746097 systemd-networkd[1385]: veth4f7dac7e: Gained carrier Mar 17 17:57:16.761769 containerd[1497]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"} Mar 17 17:57:16.761769 containerd[1497]: delegateAdd: netconf sent to delegate plugin: Mar 17 17:57:16.831895 systemd-networkd[1385]: veth7dc6bc36: Gained 
IPv6LL Mar 17 17:57:16.835408 containerd[1497]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-17T17:57:16.831209321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:57:16.835408 containerd[1497]: time="2025-03-17T17:57:16.831306828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:57:16.835408 containerd[1497]: time="2025-03-17T17:57:16.831463783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:16.835408 containerd[1497]: time="2025-03-17T17:57:16.832966005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:57:16.886701 kubelet[2658]: E0317 17:57:16.884352 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:16.929367 systemd[1]: Started cri-containerd-bceac0fb6d39cb465bcdab37afc0f8f84ecb6c76c563c0bd55a161342bca3e85.scope - libcontainer container bceac0fb6d39cb465bcdab37afc0f8f84ecb6c76c563c0bd55a161342bca3e85. 
Mar 17 17:57:16.968754 kubelet[2658]: I0317 17:57:16.957770 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zhpfq" podStartSLOduration=21.957736073 podStartE2EDuration="21.957736073s" podCreationTimestamp="2025-03-17 17:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:57:16.925467262 +0000 UTC m=+36.610661741" watchObservedRunningTime="2025-03-17 17:57:16.957736073 +0000 UTC m=+36.642930538" Mar 17 17:57:17.092197 containerd[1497]: time="2025-03-17T17:57:17.092141363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tcjtv,Uid:bdf89199-f5f3-4c7d-81c4-a7e89df4416e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bceac0fb6d39cb465bcdab37afc0f8f84ecb6c76c563c0bd55a161342bca3e85\"" Mar 17 17:57:17.095157 kubelet[2658]: E0317 17:57:17.094702 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:17.117657 containerd[1497]: time="2025-03-17T17:57:17.117284166Z" level=info msg="CreateContainer within sandbox \"bceac0fb6d39cb465bcdab37afc0f8f84ecb6c76c563c0bd55a161342bca3e85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:57:17.244689 containerd[1497]: time="2025-03-17T17:57:17.242764425Z" level=info msg="CreateContainer within sandbox \"bceac0fb6d39cb465bcdab37afc0f8f84ecb6c76c563c0bd55a161342bca3e85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5dbc5a7a235057b02fe15f0e2cc94e4901d8b9e0dd464beabc32904208bcc82d\"" Mar 17 17:57:17.251174 containerd[1497]: time="2025-03-17T17:57:17.247328017Z" level=info msg="StartContainer for \"5dbc5a7a235057b02fe15f0e2cc94e4901d8b9e0dd464beabc32904208bcc82d\"" Mar 17 17:57:17.363202 systemd[1]: Started 
cri-containerd-5dbc5a7a235057b02fe15f0e2cc94e4901d8b9e0dd464beabc32904208bcc82d.scope - libcontainer container 5dbc5a7a235057b02fe15f0e2cc94e4901d8b9e0dd464beabc32904208bcc82d. Mar 17 17:57:17.510531 containerd[1497]: time="2025-03-17T17:57:17.510156416Z" level=info msg="StartContainer for \"5dbc5a7a235057b02fe15f0e2cc94e4901d8b9e0dd464beabc32904208bcc82d\" returns successfully" Mar 17 17:57:17.599821 systemd-networkd[1385]: cni0: Gained IPv6LL Mar 17 17:57:17.961269 kubelet[2658]: E0317 17:57:17.960093 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:17.976345 kubelet[2658]: E0317 17:57:17.976267 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:18.021094 kubelet[2658]: I0317 17:57:18.019834 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tcjtv" podStartSLOduration=23.019805265 podStartE2EDuration="23.019805265s" podCreationTimestamp="2025-03-17 17:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:57:18.019234685 +0000 UTC m=+37.704429154" watchObservedRunningTime="2025-03-17 17:57:18.019805265 +0000 UTC m=+37.704999732" Mar 17 17:57:18.175017 systemd-networkd[1385]: veth4f7dac7e: Gained IPv6LL Mar 17 17:57:18.964118 kubelet[2658]: E0317 17:57:18.963566 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:57:18.967799 kubelet[2658]: E0317 17:57:18.965808 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:57:19.968759 kubelet[2658]: E0317 17:57:19.965697 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:57:29.521148 systemd[1]: Started sshd@5-64.23.213.164:22-139.178.68.195:56742.service - OpenSSH per-connection server daemon (139.178.68.195:56742).
Mar 17 17:57:29.638155 sshd[3582]: Accepted publickey for core from 139.178.68.195 port 56742 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:29.640614 sshd-session[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:29.651007 systemd-logind[1472]: New session 6 of user core.
Mar 17 17:57:29.658227 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:57:29.864964 sshd[3584]: Connection closed by 139.178.68.195 port 56742
Mar 17 17:57:29.865394 sshd-session[3582]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:29.873774 systemd[1]: sshd@5-64.23.213.164:22-139.178.68.195:56742.service: Deactivated successfully.
Mar 17 17:57:29.877695 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:57:29.879220 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:57:29.884508 systemd-logind[1472]: Removed session 6.
Mar 17 17:57:34.884733 systemd[1]: Started sshd@6-64.23.213.164:22-139.178.68.195:56744.service - OpenSSH per-connection server daemon (139.178.68.195:56744).
Mar 17 17:57:34.994313 sshd[3618]: Accepted publickey for core from 139.178.68.195 port 56744 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:34.999915 sshd-session[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:35.018817 systemd-logind[1472]: New session 7 of user core.
Mar 17 17:57:35.030000 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:57:35.213358 sshd[3620]: Connection closed by 139.178.68.195 port 56744
Mar 17 17:57:35.215197 sshd-session[3618]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:35.226307 systemd[1]: sshd@6-64.23.213.164:22-139.178.68.195:56744.service: Deactivated successfully.
Mar 17 17:57:35.232724 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:57:35.236508 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:57:35.239438 systemd-logind[1472]: Removed session 7.
Mar 17 17:57:40.239603 systemd[1]: Started sshd@7-64.23.213.164:22-139.178.68.195:38884.service - OpenSSH per-connection server daemon (139.178.68.195:38884).
Mar 17 17:57:40.316380 sshd[3654]: Accepted publickey for core from 139.178.68.195 port 38884 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:40.319501 sshd-session[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:40.329371 systemd-logind[1472]: New session 8 of user core.
Mar 17 17:57:40.333957 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:57:40.562517 sshd[3659]: Connection closed by 139.178.68.195 port 38884
Mar 17 17:57:40.563196 sshd-session[3654]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:40.583416 systemd[1]: sshd@7-64.23.213.164:22-139.178.68.195:38884.service: Deactivated successfully.
Mar 17 17:57:40.591491 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:57:40.599262 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:57:40.613862 systemd[1]: Started sshd@8-64.23.213.164:22-139.178.68.195:38890.service - OpenSSH per-connection server daemon (139.178.68.195:38890).
Mar 17 17:57:40.621172 systemd-logind[1472]: Removed session 8.
Mar 17 17:57:40.756087 sshd[3685]: Accepted publickey for core from 139.178.68.195 port 38890 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:40.759537 sshd-session[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:40.773166 systemd-logind[1472]: New session 9 of user core.
Mar 17 17:57:40.781190 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:57:41.072366 sshd[3694]: Connection closed by 139.178.68.195 port 38890
Mar 17 17:57:41.077239 sshd-session[3685]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:41.096374 systemd[1]: sshd@8-64.23.213.164:22-139.178.68.195:38890.service: Deactivated successfully.
Mar 17 17:57:41.102900 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:57:41.109306 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:57:41.115436 systemd[1]: Started sshd@9-64.23.213.164:22-139.178.68.195:38906.service - OpenSSH per-connection server daemon (139.178.68.195:38906).
Mar 17 17:57:41.122835 systemd-logind[1472]: Removed session 9.
Mar 17 17:57:41.227949 sshd[3703]: Accepted publickey for core from 139.178.68.195 port 38906 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:41.230720 sshd-session[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:41.241725 systemd-logind[1472]: New session 10 of user core.
Mar 17 17:57:41.248163 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 17:57:41.459609 sshd[3706]: Connection closed by 139.178.68.195 port 38906
Mar 17 17:57:41.459334 sshd-session[3703]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:41.472106 systemd[1]: sshd@9-64.23.213.164:22-139.178.68.195:38906.service: Deactivated successfully.
Mar 17 17:57:41.476913 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 17:57:41.482496 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit.
Mar 17 17:57:41.489349 systemd-logind[1472]: Removed session 10.
Mar 17 17:57:46.492470 systemd[1]: Started sshd@10-64.23.213.164:22-139.178.68.195:58142.service - OpenSSH per-connection server daemon (139.178.68.195:58142).
Mar 17 17:57:46.564748 sshd[3741]: Accepted publickey for core from 139.178.68.195 port 58142 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:46.568638 sshd-session[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:46.584206 systemd-logind[1472]: New session 11 of user core.
Mar 17 17:57:46.591022 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:57:46.787994 sshd[3743]: Connection closed by 139.178.68.195 port 58142
Mar 17 17:57:46.786325 sshd-session[3741]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:46.793923 systemd[1]: sshd@10-64.23.213.164:22-139.178.68.195:58142.service: Deactivated successfully.
Mar 17 17:57:46.799925 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:57:46.801508 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:57:46.803603 systemd-logind[1472]: Removed session 11.
Mar 17 17:57:47.571593 kubelet[2658]: E0317 17:57:47.571038 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:57:50.567482 kubelet[2658]: E0317 17:57:50.566722 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:57:51.830495 systemd[1]: Started sshd@11-64.23.213.164:22-139.178.68.195:58144.service - OpenSSH per-connection server daemon (139.178.68.195:58144).
Mar 17 17:57:51.902958 sshd[3776]: Accepted publickey for core from 139.178.68.195 port 58144 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:51.905362 sshd-session[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:51.915099 systemd-logind[1472]: New session 12 of user core.
Mar 17 17:57:51.926157 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:57:52.129564 sshd[3778]: Connection closed by 139.178.68.195 port 58144
Mar 17 17:57:52.130937 sshd-session[3776]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:52.138781 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:57:52.139560 systemd[1]: sshd@11-64.23.213.164:22-139.178.68.195:58144.service: Deactivated successfully.
Mar 17 17:57:52.145958 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:57:52.150570 systemd-logind[1472]: Removed session 12.
Mar 17 17:57:57.156156 systemd[1]: Started sshd@12-64.23.213.164:22-139.178.68.195:33742.service - OpenSSH per-connection server daemon (139.178.68.195:33742).
Mar 17 17:57:57.223183 sshd[3811]: Accepted publickey for core from 139.178.68.195 port 33742 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:57:57.225651 sshd-session[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:57.233997 systemd-logind[1472]: New session 13 of user core.
Mar 17 17:57:57.239386 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:57:57.484030 sshd[3813]: Connection closed by 139.178.68.195 port 33742
Mar 17 17:57:57.485220 sshd-session[3811]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:57.490239 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:57:57.491180 systemd[1]: sshd@12-64.23.213.164:22-139.178.68.195:33742.service: Deactivated successfully.
Mar 17 17:57:57.497934 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:57:57.501614 systemd-logind[1472]: Removed session 13.
Mar 17 17:58:01.569085 kubelet[2658]: E0317 17:58:01.566928 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:58:02.511218 systemd[1]: Started sshd@13-64.23.213.164:22-139.178.68.195:33754.service - OpenSSH per-connection server daemon (139.178.68.195:33754).
Mar 17 17:58:02.587686 sshd[3848]: Accepted publickey for core from 139.178.68.195 port 33754 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:02.589723 sshd-session[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:02.598274 systemd-logind[1472]: New session 14 of user core.
Mar 17 17:58:02.610084 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:58:02.785356 sshd[3850]: Connection closed by 139.178.68.195 port 33754
Mar 17 17:58:02.786520 sshd-session[3848]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:02.801403 systemd[1]: sshd@13-64.23.213.164:22-139.178.68.195:33754.service: Deactivated successfully.
Mar 17 17:58:02.804819 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:58:02.808860 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:58:02.816235 systemd[1]: Started sshd@14-64.23.213.164:22-139.178.68.195:33764.service - OpenSSH per-connection server daemon (139.178.68.195:33764).
Mar 17 17:58:02.820059 systemd-logind[1472]: Removed session 14.
Mar 17 17:58:02.893337 sshd[3861]: Accepted publickey for core from 139.178.68.195 port 33764 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:02.894422 sshd-session[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:02.913279 systemd-logind[1472]: New session 15 of user core.
Mar 17 17:58:02.919368 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:58:03.291125 sshd[3864]: Connection closed by 139.178.68.195 port 33764
Mar 17 17:58:03.292299 sshd-session[3861]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:03.307076 systemd[1]: sshd@14-64.23.213.164:22-139.178.68.195:33764.service: Deactivated successfully.
Mar 17 17:58:03.312015 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:58:03.316020 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:58:03.331916 systemd[1]: Started sshd@15-64.23.213.164:22-139.178.68.195:33778.service - OpenSSH per-connection server daemon (139.178.68.195:33778).
Mar 17 17:58:03.343983 systemd-logind[1472]: Removed session 15.
Mar 17 17:58:03.431947 sshd[3873]: Accepted publickey for core from 139.178.68.195 port 33778 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:03.434513 sshd-session[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:03.446763 systemd-logind[1472]: New session 16 of user core.
Mar 17 17:58:03.456216 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:58:05.618515 sshd[3876]: Connection closed by 139.178.68.195 port 33778
Mar 17 17:58:05.619724 sshd-session[3873]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:05.636969 systemd[1]: sshd@15-64.23.213.164:22-139.178.68.195:33778.service: Deactivated successfully.
Mar 17 17:58:05.643515 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:58:05.646896 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:58:05.659874 systemd[1]: Started sshd@16-64.23.213.164:22-139.178.68.195:57244.service - OpenSSH per-connection server daemon (139.178.68.195:57244).
Mar 17 17:58:05.662578 systemd-logind[1472]: Removed session 16.
Mar 17 17:58:05.758687 sshd[3900]: Accepted publickey for core from 139.178.68.195 port 57244 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:05.761655 sshd-session[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:05.774675 systemd-logind[1472]: New session 17 of user core.
Mar 17 17:58:05.776986 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:58:06.168358 sshd[3905]: Connection closed by 139.178.68.195 port 57244
Mar 17 17:58:06.170056 sshd-session[3900]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:06.189955 systemd[1]: sshd@16-64.23.213.164:22-139.178.68.195:57244.service: Deactivated successfully.
Mar 17 17:58:06.195584 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:58:06.200780 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:58:06.209693 systemd[1]: Started sshd@17-64.23.213.164:22-139.178.68.195:57256.service - OpenSSH per-connection server daemon (139.178.68.195:57256).
Mar 17 17:58:06.214188 systemd-logind[1472]: Removed session 17.
Mar 17 17:58:06.276757 sshd[3928]: Accepted publickey for core from 139.178.68.195 port 57256 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:06.280039 sshd-session[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:06.296818 systemd-logind[1472]: New session 18 of user core.
Mar 17 17:58:06.307223 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:58:06.503278 sshd[3931]: Connection closed by 139.178.68.195 port 57256
Mar 17 17:58:06.504597 sshd-session[3928]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:06.510008 systemd[1]: sshd@17-64.23.213.164:22-139.178.68.195:57256.service: Deactivated successfully.
Mar 17 17:58:06.515423 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:58:06.517215 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:58:06.519935 systemd-logind[1472]: Removed session 18.
Mar 17 17:58:10.566921 kubelet[2658]: E0317 17:58:10.566223 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:58:11.524107 systemd[1]: Started sshd@18-64.23.213.164:22-139.178.68.195:57266.service - OpenSSH per-connection server daemon (139.178.68.195:57266).
Mar 17 17:58:11.597454 sshd[3965]: Accepted publickey for core from 139.178.68.195 port 57266 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:11.599745 sshd-session[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:11.607527 systemd-logind[1472]: New session 19 of user core.
Mar 17 17:58:11.618578 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:58:11.785294 sshd[3967]: Connection closed by 139.178.68.195 port 57266
Mar 17 17:58:11.787309 sshd-session[3965]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:11.792961 systemd[1]: sshd@18-64.23.213.164:22-139.178.68.195:57266.service: Deactivated successfully.
Mar 17 17:58:11.798714 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:58:11.802041 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:58:11.804199 systemd-logind[1472]: Removed session 19.
Mar 17 17:58:16.811899 systemd[1]: Started sshd@19-64.23.213.164:22-139.178.68.195:49852.service - OpenSSH per-connection server daemon (139.178.68.195:49852).
Mar 17 17:58:16.893235 sshd[4003]: Accepted publickey for core from 139.178.68.195 port 49852 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:16.895898 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:16.908240 systemd-logind[1472]: New session 20 of user core.
Mar 17 17:58:16.910349 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:58:17.193688 sshd[4005]: Connection closed by 139.178.68.195 port 49852
Mar 17 17:58:17.212214 sshd-session[4003]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:17.218752 systemd[1]: sshd@19-64.23.213.164:22-139.178.68.195:49852.service: Deactivated successfully.
Mar 17 17:58:17.222558 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:58:17.227444 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:58:17.229407 systemd-logind[1472]: Removed session 20.
Mar 17 17:58:18.566665 kubelet[2658]: E0317 17:58:18.565789 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:58:22.237138 systemd[1]: Started sshd@20-64.23.213.164:22-139.178.68.195:49860.service - OpenSSH per-connection server daemon (139.178.68.195:49860).
Mar 17 17:58:22.309447 sshd[4037]: Accepted publickey for core from 139.178.68.195 port 49860 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:22.312016 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:22.324527 systemd-logind[1472]: New session 21 of user core.
Mar 17 17:58:22.331013 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:58:22.585445 sshd[4039]: Connection closed by 139.178.68.195 port 49860
Mar 17 17:58:22.586523 sshd-session[4037]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:22.616715 systemd[1]: sshd@20-64.23.213.164:22-139.178.68.195:49860.service: Deactivated successfully.
Mar 17 17:58:22.621612 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:58:22.627211 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:58:22.630417 systemd-logind[1472]: Removed session 21.
Mar 17 17:58:27.569426 kubelet[2658]: E0317 17:58:27.566523 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:58:27.621201 systemd[1]: Started sshd@21-64.23.213.164:22-139.178.68.195:37446.service - OpenSSH per-connection server daemon (139.178.68.195:37446).
Mar 17 17:58:27.725422 sshd[4073]: Accepted publickey for core from 139.178.68.195 port 37446 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:58:27.730443 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:58:27.742608 systemd-logind[1472]: New session 22 of user core.
Mar 17 17:58:27.752803 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:58:28.014763 sshd[4075]: Connection closed by 139.178.68.195 port 37446
Mar 17 17:58:28.016517 sshd-session[4073]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:28.024319 systemd[1]: sshd@21-64.23.213.164:22-139.178.68.195:37446.service: Deactivated successfully.
Mar 17 17:58:28.028419 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:58:28.036082 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:58:28.040178 systemd-logind[1472]: Removed session 22.