Mar 17 17:58:19.909921 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:58:19.909948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:58:19.909961 kernel: BIOS-provided physical RAM map:
Mar 17 17:58:19.909968 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:58:19.909974 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:58:19.909981 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:58:19.909989 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Mar 17 17:58:19.909996 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Mar 17 17:58:19.910002 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:58:19.910009 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:58:19.910019 kernel: NX (Execute Disable) protection: active
Mar 17 17:58:19.910030 kernel: APIC: Static calls initialized
Mar 17 17:58:19.910037 kernel: SMBIOS 2.8 present.
Mar 17 17:58:19.910044 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Mar 17 17:58:19.910053 kernel: Hypervisor detected: KVM
Mar 17 17:58:19.910061 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:58:19.910074 kernel: kvm-clock: using sched offset of 3487652760 cycles
Mar 17 17:58:19.910083 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:58:19.910091 kernel: tsc: Detected 2494.146 MHz processor
Mar 17 17:58:19.910100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:58:19.910108 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:58:19.910116 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Mar 17 17:58:19.910124 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:58:19.910132 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:58:19.910143 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:58:19.910151 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Mar 17 17:58:19.910159 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:58:19.910167 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:58:19.910175 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:58:19.910184 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 17 17:58:19.910191 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:58:19.910202 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:58:19.910214 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:58:19.910230 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:58:19.910266 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Mar 17 17:58:19.910274 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Mar 17 17:58:19.910282 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 17 17:58:19.910290 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Mar 17 17:58:19.910298 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Mar 17 17:58:19.910306 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Mar 17 17:58:19.910319 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Mar 17 17:58:19.910331 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:58:19.910342 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:58:19.910350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 17:58:19.910358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 17:58:19.910367 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Mar 17 17:58:19.910375 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Mar 17 17:58:19.910387 kernel: Zone ranges:
Mar 17 17:58:19.910395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:58:19.910403 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Mar 17 17:58:19.910411 kernel: Normal empty
Mar 17 17:58:19.910419 kernel: Movable zone start for each node
Mar 17 17:58:19.910428 kernel: Early memory node ranges
Mar 17 17:58:19.910436 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:58:19.910444 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Mar 17 17:58:19.910452 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Mar 17 17:58:19.910461 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:58:19.910472 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:58:19.910482 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Mar 17 17:58:19.910491 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:58:19.910499 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:58:19.910508 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:58:19.910516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:58:19.910524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:58:19.910532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:58:19.910540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:58:19.910552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:58:19.910560 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:58:19.910568 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:58:19.910576 kernel: TSC deadline timer available
Mar 17 17:58:19.910584 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:58:19.910593 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:58:19.910601 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Mar 17 17:58:19.910612 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:58:19.910620 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:58:19.910632 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:58:19.910640 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:58:19.910649 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:58:19.910657 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:58:19.910665 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 17:58:19.910674 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:58:19.910683 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:58:19.910692 kernel: random: crng init done
Mar 17 17:58:19.910703 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:58:19.910711 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:58:19.910720 kernel: Fallback order for Node 0: 0
Mar 17 17:58:19.910729 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Mar 17 17:58:19.910741 kernel: Policy zone: DMA32
Mar 17 17:58:19.910754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:58:19.910768 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 127196K reserved, 0K cma-reserved)
Mar 17 17:58:19.910782 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:58:19.910791 kernel: Kernel/User page tables isolation: enabled
Mar 17 17:58:19.910806 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:58:19.910820 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:58:19.910834 kernel: Dynamic Preempt: voluntary
Mar 17 17:58:19.910843 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:58:19.910852 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:58:19.910861 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:58:19.910869 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:58:19.910878 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:58:19.910886 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:58:19.910898 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:58:19.910906 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:58:19.910914 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:58:19.910926 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:58:19.910934 kernel: Console: colour VGA+ 80x25
Mar 17 17:58:19.910943 kernel: printk: console [tty0] enabled
Mar 17 17:58:19.910951 kernel: printk: console [ttyS0] enabled
Mar 17 17:58:19.910959 kernel: ACPI: Core revision 20230628
Mar 17 17:58:19.910968 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:58:19.910979 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:58:19.910988 kernel: x2apic enabled
Mar 17 17:58:19.910997 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:58:19.911005 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:58:19.911014 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Mar 17 17:58:19.911022 kernel: Calibrating delay loop (skipped) preset value.. 4988.29 BogoMIPS (lpj=2494146)
Mar 17 17:58:19.911031 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 17:58:19.911039 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 17:58:19.911060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:58:19.911069 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:58:19.911077 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:58:19.911086 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:58:19.911098 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 17:58:19.911107 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:58:19.911116 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:58:19.911125 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 17:58:19.911134 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:58:19.911149 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:58:19.911158 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:58:19.911167 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:58:19.911176 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:58:19.911185 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 17:58:19.911194 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:58:19.911205 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:58:19.911218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:58:19.911233 kernel: landlock: Up and running.
Mar 17 17:58:19.911264 kernel: SELinux: Initializing.
Mar 17 17:58:19.911278 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:58:19.911287 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:58:19.911296 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Mar 17 17:58:19.911320 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:58:19.911329 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:58:19.911338 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:58:19.911347 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Mar 17 17:58:19.911361 kernel: signal: max sigframe size: 1776
Mar 17 17:58:19.911369 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:58:19.911379 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:58:19.911388 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:58:19.911397 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:58:19.911405 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:58:19.911414 kernel: .... node #0, CPUs: #1
Mar 17 17:58:19.911427 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:58:19.911436 kernel: smpboot: Max logical packages: 1
Mar 17 17:58:19.911448 kernel: smpboot: Total of 2 processors activated (9976.58 BogoMIPS)
Mar 17 17:58:19.911457 kernel: devtmpfs: initialized
Mar 17 17:58:19.911466 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:58:19.911475 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:58:19.911484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:58:19.911493 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:58:19.911502 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:58:19.911511 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:58:19.911520 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:58:19.911532 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:58:19.911540 kernel: audit: type=2000 audit(1742234299.719:1): state=initialized audit_enabled=0 res=1
Mar 17 17:58:19.911549 kernel: cpuidle: using governor menu
Mar 17 17:58:19.911558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:58:19.911567 kernel: dca service started, version 1.12.1
Mar 17 17:58:19.911576 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:58:19.911585 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:58:19.911594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:58:19.911603 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:58:19.911621 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:58:19.911630 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:58:19.911639 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:58:19.911648 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:58:19.911657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:58:19.911665 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:58:19.911674 kernel: ACPI: Interpreter enabled
Mar 17 17:58:19.911683 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:58:19.911692 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:58:19.911704 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:58:19.911713 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:58:19.911722 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 17:58:19.911730 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:58:19.911955 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:58:19.912095 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 17:58:19.912206 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 17:58:19.912224 kernel: acpiphp: Slot [3] registered
Mar 17 17:58:19.912233 kernel: acpiphp: Slot [4] registered
Mar 17 17:58:19.912265 kernel: acpiphp: Slot [5] registered
Mar 17 17:58:19.912274 kernel: acpiphp: Slot [6] registered
Mar 17 17:58:19.912294 kernel: acpiphp: Slot [7] registered
Mar 17 17:58:19.912302 kernel: acpiphp: Slot [8] registered
Mar 17 17:58:19.912311 kernel: acpiphp: Slot [9] registered
Mar 17 17:58:19.912320 kernel: acpiphp: Slot [10] registered
Mar 17 17:58:19.912329 kernel: acpiphp: Slot [11] registered
Mar 17 17:58:19.912338 kernel: acpiphp: Slot [12] registered
Mar 17 17:58:19.912351 kernel: acpiphp: Slot [13] registered
Mar 17 17:58:19.912359 kernel: acpiphp: Slot [14] registered
Mar 17 17:58:19.912368 kernel: acpiphp: Slot [15] registered
Mar 17 17:58:19.912377 kernel: acpiphp: Slot [16] registered
Mar 17 17:58:19.912386 kernel: acpiphp: Slot [17] registered
Mar 17 17:58:19.912395 kernel: acpiphp: Slot [18] registered
Mar 17 17:58:19.912403 kernel: acpiphp: Slot [19] registered
Mar 17 17:58:19.912412 kernel: acpiphp: Slot [20] registered
Mar 17 17:58:19.912421 kernel: acpiphp: Slot [21] registered
Mar 17 17:58:19.912433 kernel: acpiphp: Slot [22] registered
Mar 17 17:58:19.912441 kernel: acpiphp: Slot [23] registered
Mar 17 17:58:19.912450 kernel: acpiphp: Slot [24] registered
Mar 17 17:58:19.912459 kernel: acpiphp: Slot [25] registered
Mar 17 17:58:19.912467 kernel: acpiphp: Slot [26] registered
Mar 17 17:58:19.912476 kernel: acpiphp: Slot [27] registered
Mar 17 17:58:19.912485 kernel: acpiphp: Slot [28] registered
Mar 17 17:58:19.912493 kernel: acpiphp: Slot [29] registered
Mar 17 17:58:19.912502 kernel: acpiphp: Slot [30] registered
Mar 17 17:58:19.912511 kernel: acpiphp: Slot [31] registered
Mar 17 17:58:19.912523 kernel: PCI host bridge to bus 0000:00
Mar 17 17:58:19.912646 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:58:19.912740 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:58:19.912830 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:58:19.912939 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 17:58:19.913032 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Mar 17 17:58:19.913122 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:58:19.913949 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 17:58:19.914107 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 17:58:19.914218 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 17:58:19.914352 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Mar 17 17:58:19.914459 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 17:58:19.914611 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 17:58:19.914758 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 17:58:19.914888 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 17:58:19.915011 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Mar 17 17:58:19.915116 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Mar 17 17:58:19.915338 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 17:58:19.915471 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 17:58:19.915606 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 17:58:19.915724 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 17:58:19.915824 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 17:58:19.915921 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Mar 17 17:58:19.916025 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Mar 17 17:58:19.916122 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 17:58:19.916218 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:58:19.917361 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:58:19.917497 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Mar 17 17:58:19.917600 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Mar 17 17:58:19.917700 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Mar 17 17:58:19.917821 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:58:19.917931 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Mar 17 17:58:19.918052 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Mar 17 17:58:19.918179 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar 17 17:58:19.919419 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Mar 17 17:58:19.919548 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Mar 17 17:58:19.919676 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Mar 17 17:58:19.919783 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar 17 17:58:19.919907 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:58:19.920006 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 17:58:19.920117 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Mar 17 17:58:19.921298 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Mar 17 17:58:19.921484 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:58:19.921613 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Mar 17 17:58:19.921715 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Mar 17 17:58:19.921826 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Mar 17 17:58:19.921932 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 17:58:19.922038 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Mar 17 17:58:19.922178 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Mar 17 17:58:19.922193 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:58:19.922216 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:58:19.922225 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:58:19.923269 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:58:19.923285 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 17:58:19.923316 kernel: iommu: Default domain type: Translated
Mar 17 17:58:19.923331 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:58:19.923342 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:58:19.923351 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:58:19.923364 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:58:19.923377 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Mar 17 17:58:19.923528 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 17:58:19.923653 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 17:58:19.923808 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:58:19.923835 kernel: vgaarb: loaded
Mar 17 17:58:19.923848 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:58:19.923857 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:58:19.923866 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:58:19.923876 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:58:19.923885 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:58:19.923894 kernel: pnp: PnP ACPI init
Mar 17 17:58:19.923903 kernel: pnp: PnP ACPI: found 4 devices
Mar 17 17:58:19.923914 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:58:19.923930 kernel: NET: Registered PF_INET protocol family
Mar 17 17:58:19.923939 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:58:19.923948 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:58:19.923957 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:58:19.923966 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:58:19.923977 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:58:19.923991 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:58:19.924004 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:58:19.924022 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:58:19.924036 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:58:19.924050 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:58:19.924196 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:58:19.924849 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:58:19.924992 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:58:19.925101 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 17:58:19.925191 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Mar 17 17:58:19.926400 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 17:58:19.926525 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 17:58:19.926540 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 17:58:19.926640 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 32182 usecs
Mar 17 17:58:19.926653 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:58:19.926663 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:58:19.926672 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Mar 17 17:58:19.926681 kernel: Initialise system trusted keyrings
Mar 17 17:58:19.926691 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:58:19.926703 kernel: Key type asymmetric registered
Mar 17 17:58:19.926712 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:58:19.926721 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:58:19.926730 kernel: io scheduler mq-deadline registered
Mar 17 17:58:19.926739 kernel: io scheduler kyber registered
Mar 17 17:58:19.926748 kernel: io scheduler bfq registered
Mar 17 17:58:19.926757 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:58:19.926766 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 17:58:19.926775 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 17:58:19.926787 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 17:58:19.926796 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:58:19.926805 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:58:19.926814 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:58:19.926823 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:58:19.926832 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:58:19.926960 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 17:58:19.926975 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:58:19.927082 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 17:58:19.927173 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T17:58:19 UTC (1742234299)
Mar 17 17:58:19.927284 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 17 17:58:19.927311 kernel: intel_pstate: CPU model not supported
Mar 17 17:58:19.927321 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:58:19.927330 kernel: Segment Routing with IPv6
Mar 17 17:58:19.927338 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:58:19.927347 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:58:19.927356 kernel: Key type dns_resolver registered
Mar 17 17:58:19.927376 kernel: IPI shorthand broadcast: enabled
Mar 17 17:58:19.927390 kernel: sched_clock: Marking stable (864002198, 90636622)->(1062508922, -107870102)
Mar 17 17:58:19.927403 kernel: registered taskstats version 1
Mar 17 17:58:19.927416 kernel: Loading compiled-in X.509 certificates
Mar 17 17:58:19.927427 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:58:19.927436 kernel: Key type .fscrypt registered
Mar 17 17:58:19.927445 kernel: Key type fscrypt-provisioning registered
Mar 17 17:58:19.927453 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:58:19.927467 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:58:19.927476 kernel: ima: No architecture policies found
Mar 17 17:58:19.927485 kernel: clk: Disabling unused clocks
Mar 17 17:58:19.927493 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:58:19.927503 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:58:19.927531 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:58:19.927543 kernel: Run /init as init process
Mar 17 17:58:19.927553 kernel: with arguments:
Mar 17 17:58:19.927563 kernel: /init
Mar 17 17:58:19.927575 kernel: with environment:
Mar 17 17:58:19.927584 kernel: HOME=/
Mar 17 17:58:19.927593 kernel: TERM=linux
Mar 17 17:58:19.927602 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:58:19.927613 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:58:19.927627 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:58:19.927638 systemd[1]: Detected virtualization kvm.
Mar 17 17:58:19.927648 systemd[1]: Detected architecture x86-64.
Mar 17 17:58:19.927660 systemd[1]: Running in initrd.
Mar 17 17:58:19.927671 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:58:19.927687 systemd[1]: Hostname set to .
Mar 17 17:58:19.927697 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:58:19.927707 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:58:19.927717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:58:19.927727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:58:19.927738 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:58:19.927751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:58:19.927761 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:58:19.927772 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:58:19.927785 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:58:19.927795 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:58:19.927805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:58:19.927815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:58:19.927828 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:58:19.927838 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:58:19.927851 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:58:19.927861 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:58:19.927871 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:58:19.927884 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:58:19.927894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:58:19.927904 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:58:19.927914 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:58:19.927924 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:58:19.927934 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:58:19.927943 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:58:19.927954 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:58:19.927964 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:58:19.927977 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:58:19.927986 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:58:19.927996 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:58:19.928006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:58:19.928016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:58:19.928069 systemd-journald[183]: Collecting audit messages is disabled.
Mar 17 17:58:19.928099 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:58:19.928109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:58:19.928122 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:58:19.928139 systemd-journald[183]: Journal started
Mar 17 17:58:19.928161 systemd-journald[183]: Runtime Journal (/run/log/journal/6ef8b000a04b44f3a612017593cce8e1) is 4.9M, max 39.3M, 34.4M free.
Mar 17 17:58:19.930280 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:58:19.934712 systemd-modules-load[184]: Inserted module 'overlay'
Mar 17 17:58:19.963281 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:58:19.965254 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:58:19.967922 systemd-modules-load[184]: Inserted module 'br_netfilter'
Mar 17 17:58:19.968964 kernel: Bridge firewalling registered
Mar 17 17:58:19.975047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:58:19.979081 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:19.981454 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:58:19.988525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:58:19.991057 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:58:19.993326 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:58:19.997529 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:58:20.022832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:58:20.024779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:58:20.026360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:58:20.027107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:58:20.035537 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:58:20.039528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:58:20.056263 dracut-cmdline[220]: dracut-dracut-053
Mar 17 17:58:20.058272 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:58:20.088572 systemd-resolved[221]: Positive Trust Anchors:
Mar 17 17:58:20.089210 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:58:20.089788 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:58:20.094892 systemd-resolved[221]: Defaulting to hostname 'linux'.
Mar 17 17:58:20.096550 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:58:20.097023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:58:20.161291 kernel: SCSI subsystem initialized
Mar 17 17:58:20.171276 kernel: Loading iSCSI transport class v2.0-870.
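[Editor's note: the dracut entry above echoes the full kernel command line, with `rootflags=rw` and `mount.usrflags=ro` appearing twice. A minimal sketch of turning such a line into key/value pairs is below; `parse_cmdline` is a hypothetical helper for illustration, not dracut's actual parser, and it keeps only the last value for a repeated key.]

```python
# Sketch: split a kernel command line (as logged by dracut above) into
# key/value pairs. Flag-style parameters with no '=' map to None.
# NOTE: parse_cmdline is a hypothetical illustration, not a dracut API.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None  # last occurrence wins
    return params

# Excerpt of the command line from the log above:
args = parse_cmdline(
    "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
    "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
    "flatcar.first_boot=detected flatcar.oem.id=digitalocean"
)
```

With a plain dict, the duplicated `rootflags`/`mount.usrflags` entries in the log are harmless (both copies carry the same value), but note that `console=` is given twice with different values, so a dict-based parse keeps only `tty0`; the kernel itself honors both consoles.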
Mar 17 17:58:20.184271 kernel: iscsi: registered transport (tcp)
Mar 17 17:58:20.209547 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:58:20.209642 kernel: QLogic iSCSI HBA Driver
Mar 17 17:58:20.266889 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:58:20.277542 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:58:20.308466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:58:20.308559 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:58:20.309600 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:58:20.353283 kernel: raid6: avx2x4 gen() 15051 MB/s
Mar 17 17:58:20.370289 kernel: raid6: avx2x2 gen() 15591 MB/s
Mar 17 17:58:20.387434 kernel: raid6: avx2x1 gen() 12411 MB/s
Mar 17 17:58:20.387521 kernel: raid6: using algorithm avx2x2 gen() 15591 MB/s
Mar 17 17:58:20.405439 kernel: raid6: .... xor() 20059 MB/s, rmw enabled
Mar 17 17:58:20.405531 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:58:20.428276 kernel: xor: automatically using best checksumming function avx
Mar 17 17:58:20.606301 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:58:20.621096 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:58:20.627571 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:58:20.660486 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Mar 17 17:58:20.666792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:58:20.675690 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:58:20.694717 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Mar 17 17:58:20.729165 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:58:20.734473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:58:20.807357 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:58:20.815795 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:58:20.839938 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:58:20.843884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:58:20.844891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:58:20.845733 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:58:20.850447 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:58:20.880292 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:58:20.907272 kernel: libata version 3.00 loaded.
Mar 17 17:58:20.914269 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Mar 17 17:58:20.938486 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Mar 17 17:58:20.941016 kernel: scsi host0: Virtio SCSI HBA
Mar 17 17:58:20.941485 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 17 17:58:20.941658 kernel: scsi host1: ata_piix
Mar 17 17:58:20.941797 kernel: scsi host2: ata_piix
Mar 17 17:58:20.942133 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Mar 17 17:58:20.942159 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Mar 17 17:58:20.942178 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:58:20.942198 kernel: GPT:9289727 != 125829119
Mar 17 17:58:20.942215 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:58:20.942233 kernel: GPT:9289727 != 125829119
Mar 17 17:58:20.942276 kernel: GPT: Use GNU Parted to correct GPT errors.
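[Editor's note: the GPT warning above is the usual sign of a disk image that was written for a smaller disk and then grown, which is normal for a cloud image on first boot. A minimal sketch of the arithmetic, assuming the standard GPT layout where the alternate (backup) header sits on the last LBA:]

```python
# The alternate GPT header is expected at the disk's last LBA,
# i.e. total_sectors - 1 (standard GPT layout).
SECTOR = 512

def alt_header_lba(total_sectors: int) -> int:
    return total_sectors - 1

# vda is reported above as 125829120 512-byte logical blocks,
# so the alternate header belongs at LBA 125829119 -- the right-hand
# side of the logged "9289727 != 125829119" mismatch.
total_sectors = 125829120

# The primary header instead points at LBA 9289727, meaning the image
# was built for a disk of 9289728 sectors (~4.4 GiB) that was later
# grown to 60 GiB -- exactly what the kernel flags, and what the
# disk-uuid.service run later in this log repairs.
stale_alt_lba = 9289727
original_size_gib = (stale_alt_lba + 1) * SECTOR / 2**30
```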
Mar 17 17:58:20.942288 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:58:20.942307 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Mar 17 17:58:20.958167 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:58:20.958188 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Mar 17 17:58:20.960797 kernel: ACPI: bus type USB registered
Mar 17 17:58:20.960852 kernel: usbcore: registered new interface driver usbfs
Mar 17 17:58:20.960866 kernel: usbcore: registered new interface driver hub
Mar 17 17:58:20.961468 kernel: usbcore: registered new device driver usb
Mar 17 17:58:20.966580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:58:20.967328 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:58:20.968384 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:58:20.969315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:58:20.969870 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:20.970897 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:58:20.976615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:58:20.978417 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:58:21.020689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:21.031540 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:58:21.051887 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:58:21.105532 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:58:21.107272 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:58:21.139094 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (449)
Mar 17 17:58:21.148260 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (450)
Mar 17 17:58:21.174473 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:58:21.191624 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Mar 17 17:58:21.195115 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Mar 17 17:58:21.195392 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Mar 17 17:58:21.195529 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Mar 17 17:58:21.195658 kernel: hub 1-0:1.0: USB hub found
Mar 17 17:58:21.195801 kernel: hub 1-0:1.0: 2 ports detected
Mar 17 17:58:21.193108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:58:21.204304 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:58:21.212393 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:58:21.213000 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:58:21.222494 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:58:21.229949 disk-uuid[550]: Primary Header is updated.
Mar 17 17:58:21.229949 disk-uuid[550]: Secondary Entries is updated.
Mar 17 17:58:21.229949 disk-uuid[550]: Secondary Header is updated.
Mar 17 17:58:21.246061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:58:21.261271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:58:22.255355 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:58:22.255625 disk-uuid[551]: The operation has completed successfully.
Mar 17 17:58:22.314719 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:58:22.314829 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:58:22.364646 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:58:22.369081 sh[562]: Success
Mar 17 17:58:22.387032 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 17:58:22.457019 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:58:22.460389 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:58:22.461836 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:58:22.494610 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:58:22.494673 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:58:22.496720 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:58:22.496809 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:58:22.497649 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:58:22.505660 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:58:22.506826 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:58:22.512466 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:58:22.515122 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:58:22.531923 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:58:22.531990 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:58:22.532351 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:58:22.537279 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:58:22.549555 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:58:22.551698 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:58:22.557163 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:58:22.565470 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:58:22.652912 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:58:22.663649 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:58:22.702839 systemd-networkd[749]: lo: Link UP
Mar 17 17:58:22.702854 systemd-networkd[749]: lo: Gained carrier
Mar 17 17:58:22.706626 systemd-networkd[749]: Enumeration completed
Mar 17 17:58:22.706793 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:58:22.707559 systemd[1]: Reached target network.target - Network.
Mar 17 17:58:22.708637 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Mar 17 17:58:22.708644 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Mar 17 17:58:22.709754 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:58:22.709760 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:58:22.710619 systemd-networkd[749]: eth0: Link UP
Mar 17 17:58:22.710626 systemd-networkd[749]: eth0: Gained carrier
Mar 17 17:58:22.710639 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Mar 17 17:58:22.715110 systemd-networkd[749]: eth1: Link UP
Mar 17 17:58:22.715909 systemd-networkd[749]: eth1: Gained carrier
Mar 17 17:58:22.715942 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:58:22.726175 ignition[654]: Ignition 2.20.0
Mar 17 17:58:22.726209 ignition[654]: Stage: fetch-offline
Mar 17 17:58:22.726310 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:58:22.726326 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:58:22.726574 ignition[654]: parsed url from cmdline: ""
Mar 17 17:58:22.726581 ignition[654]: no config URL provided
Mar 17 17:58:22.726590 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:58:22.726603 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:58:22.726614 ignition[654]: failed to fetch config: resource requires networking
Mar 17 17:58:22.727305 ignition[654]: Ignition finished successfully
Mar 17 17:58:22.730657 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:58:22.732435 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253
Mar 17 17:58:22.734432 systemd-networkd[749]: eth0: DHCPv4 address 159.223.200.207/20, gateway 159.223.192.1 acquired from 169.254.169.253
Mar 17 17:58:22.740059 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:58:22.765868 ignition[757]: Ignition 2.20.0
Mar 17 17:58:22.765880 ignition[757]: Stage: fetch
Mar 17 17:58:22.766116 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:58:22.766129 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:58:22.766254 ignition[757]: parsed url from cmdline: ""
Mar 17 17:58:22.766258 ignition[757]: no config URL provided
Mar 17 17:58:22.766264 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:58:22.766277 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:58:22.766311 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Mar 17 17:58:22.779949 ignition[757]: GET result: OK
Mar 17 17:58:22.780662 ignition[757]: parsing config with SHA512: a95f50e67a8a2f8fe8c239c1d8e9e0fbfd7c0fdf5e35bcc4dfcec3bf12cfce341e47471146d1d41e08c7d60457ff1b8d8bdd157465d1bc0aacca279fb6b3c78b
Mar 17 17:58:22.786011 unknown[757]: fetched base config from "system"
Mar 17 17:58:22.786023 unknown[757]: fetched base config from "system"
Mar 17 17:58:22.786030 unknown[757]: fetched user config from "digitalocean"
Mar 17 17:58:22.786634 ignition[757]: fetch: fetch complete
Mar 17 17:58:22.786647 ignition[757]: fetch: fetch passed
Mar 17 17:58:22.788552 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:58:22.786731 ignition[757]: Ignition finished successfully
Mar 17 17:58:22.795598 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:58:22.828345 ignition[764]: Ignition 2.20.0
Mar 17 17:58:22.828364 ignition[764]: Stage: kargs
Mar 17 17:58:22.828594 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:58:22.831116 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
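[Editor's note: the "parsing config with SHA512: …" line above is Ignition reporting the SHA-512 hex digest of the user-data it just fetched from the DigitalOcean metadata endpoint. The sketch below reproduces only the digest step with a placeholder payload; it is an illustration, not Ignition's actual code path.]

```python
# Sketch: compute the SHA-512 hex digest of fetched user-data bytes,
# as Ignition logs above. The payload here is a placeholder, so its
# digest will not match the one in the log.
import hashlib

def config_digest(user_data: bytes) -> str:
    return hashlib.sha512(user_data).hexdigest()

digest = config_digest(b'{"ignition": {"version": "3.4.0"}}')  # placeholder
assert len(digest) == 128  # SHA-512 -> 64 bytes -> 128 hex characters
```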
Mar 17 17:58:22.828607 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:58:22.829471 ignition[764]: kargs: kargs passed
Mar 17 17:58:22.829543 ignition[764]: Ignition finished successfully
Mar 17 17:58:22.845521 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:58:22.860785 ignition[770]: Ignition 2.20.0
Mar 17 17:58:22.860806 ignition[770]: Stage: disks
Mar 17 17:58:22.861388 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:58:22.861401 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:58:22.866098 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:58:22.862881 ignition[770]: disks: disks passed
Mar 17 17:58:22.867694 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:58:22.862935 ignition[770]: Ignition finished successfully
Mar 17 17:58:22.868348 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:58:22.868977 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:58:22.869888 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:58:22.870495 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:58:22.881601 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:58:22.899732 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:58:22.902499 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:58:23.502452 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:58:23.609257 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 17:58:23.610144 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:58:23.611405 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:58:23.625437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:58:23.628301 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:58:23.631599 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Mar 17 17:58:23.643279 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786)
Mar 17 17:58:23.643731 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:58:23.649280 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:58:23.649319 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:58:23.649340 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:58:23.648700 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:58:23.648768 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:58:23.651806 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:58:23.662269 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:58:23.662684 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:58:23.669033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:58:23.730531 coreos-metadata[789]: Mar 17 17:58:23.730 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:58:23.739375 coreos-metadata[788]: Mar 17 17:58:23.739 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:58:23.745263 coreos-metadata[789]: Mar 17 17:58:23.744 INFO Fetch successful
Mar 17 17:58:23.748945 coreos-metadata[789]: Mar 17 17:58:23.748 INFO wrote hostname ci-4230.1.0-6-847a660ba6 to /sysroot/etc/hostname
Mar 17 17:58:23.751374 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:58:23.750482 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:58:23.756895 coreos-metadata[788]: Mar 17 17:58:23.756 INFO Fetch successful
Mar 17 17:58:23.762453 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:58:23.765414 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Mar 17 17:58:23.765598 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Mar 17 17:58:23.770022 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:58:23.775491 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:58:23.885797 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:58:23.889434 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:58:23.892445 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:58:23.905487 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:58:23.905465 systemd-networkd[749]: eth0: Gained IPv6LL
Mar 17 17:58:23.923872 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:58:23.934012 ignition[908]: INFO : Ignition 2.20.0
Mar 17 17:58:23.934012 ignition[908]: INFO : Stage: mount
Mar 17 17:58:23.935001 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:58:23.935001 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:58:23.936723 ignition[908]: INFO : mount: mount passed
Mar 17 17:58:23.936723 ignition[908]: INFO : Ignition finished successfully
Mar 17 17:58:23.936971 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:58:23.943385 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:58:24.225586 systemd-networkd[749]: eth1: Gained IPv6LL
Mar 17 17:58:24.493774 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:58:24.507568 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:58:24.517270 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (918)
Mar 17 17:58:24.520393 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:58:24.520468 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:58:24.520492 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:58:24.524284 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:58:24.526546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:58:24.551869 ignition[935]: INFO : Ignition 2.20.0
Mar 17 17:58:24.552776 ignition[935]: INFO : Stage: files
Mar 17 17:58:24.553521 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:58:24.555330 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:58:24.556311 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:58:24.557303 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:58:24.557303 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:58:24.559824 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:58:24.560531 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:58:24.560531 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:58:24.560397 unknown[935]: wrote ssh authorized keys file for user: core
Mar 17 17:58:24.562646 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 17 17:58:24.562646 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Mar 17 17:58:24.600572 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:58:25.552438 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 17 17:58:25.552438 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:58:25.554576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 17 17:58:25.890452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:58:26.256292 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:58:26.256292 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:58:26.258745 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:58:26.259646 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:58:26.259646 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:58:26.259646 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:58:26.261940 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:58:26.261940 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:58:26.261940 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:58:26.261940 ignition[935]: INFO : files: files passed
Mar 17 17:58:26.261940 ignition[935]: INFO : Ignition finished successfully
Mar 17 17:58:26.262136 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:58:26.276611 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:58:26.280030 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:58:26.283928 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:58:26.284140 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:58:26.315860 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:26.315860 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:26.317873 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:58:26.321469 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:58:26.322351 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:58:26.326719 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:58:26.384816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:58:26.385012 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:58:26.386637 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:58:26.387112 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:58:26.387966 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:58:26.396563 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:58:26.411052 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:58:26.417488 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:58:26.432535 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:58:26.433868 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
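The two `grep: ... No such file or directory` lines from initrd-setup-root-after-ignition are harmless: the completion step probes both possible enabled-sysext.conf locations and treats absence as "nothing explicitly enabled". A minimal sketch of such a tolerant probe — an assumption about the script's logic, not the actual Flatcar code:

```shell
# Tolerant probe for enabled-sysext.conf, mirroring (by assumption) the check
# whose grep errors appear in the log: a missing file means "none enabled",
# not a service failure.
enabled_sysexts() {
    sysroot="$1"
    out=$(grep -h '^[^#]' \
        "$sysroot/etc/flatcar/enabled-sysext.conf" \
        "$sysroot/usr/share/flatcar/enabled-sysext.conf" 2>/dev/null)
    printf '%s\n' "${out:-none}"
}
```

Because the grep status is absorbed into a variable, the unit can still finish successfully, exactly as the log shows.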
Mar 17 17:58:26.434401 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:58:26.435331 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:58:26.435477 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:58:26.436491 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:58:26.436989 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:58:26.437776 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:58:26.438504 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:58:26.439255 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:58:26.440050 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:58:26.440897 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:58:26.441737 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:58:26.442482 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:58:26.443348 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:58:26.444003 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:58:26.444141 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:58:26.445229 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:58:26.446083 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:58:26.446882 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:58:26.447013 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:58:26.447646 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:58:26.447787 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 17 17:58:26.448706 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:58:26.448873 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:58:26.449636 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:58:26.449765 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:58:26.450704 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 17:58:26.450929 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:58:26.465170 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:58:26.468580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:58:26.468943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:58:26.469148 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:58:26.469775 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:58:26.469946 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:58:26.480044 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:58:26.481234 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:58:26.489282 ignition[988]: INFO : Ignition 2.20.0 Mar 17 17:58:26.489282 ignition[988]: INFO : Stage: umount Mar 17 17:58:26.490386 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:58:26.490386 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:58:26.492361 ignition[988]: INFO : umount: umount passed Mar 17 17:58:26.492361 ignition[988]: INFO : Ignition finished successfully Mar 17 17:58:26.492494 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:58:26.492641 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Mar 17 17:58:26.496761 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:58:26.496937 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:58:26.501998 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:58:26.502106 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:58:26.504513 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:58:26.504610 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:58:26.505191 systemd[1]: Stopped target network.target - Network. Mar 17 17:58:26.505631 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:58:26.505711 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:58:26.506204 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:58:26.509412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:58:26.513448 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:58:26.514087 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:58:26.514510 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:58:26.515419 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:58:26.515472 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:58:26.516027 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:58:26.516080 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:58:26.516746 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:58:26.516809 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:58:26.517348 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:58:26.517388 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Mar 17 17:58:26.518316 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:58:26.518900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:58:26.521145 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:58:26.521975 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:58:26.522087 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:58:26.523349 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:58:26.523486 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:58:26.527635 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:58:26.527799 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:58:26.532581 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:58:26.532969 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:58:26.533126 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:58:26.535397 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:58:26.536440 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:58:26.536526 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:58:26.546948 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:58:26.547519 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:58:26.547635 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:58:26.548202 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:58:26.548301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:58:26.548881 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Mar 17 17:58:26.548932 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:58:26.549553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:58:26.549614 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:58:26.550450 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:58:26.553769 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:58:26.553860 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:58:26.564106 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:58:26.565287 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:58:26.570708 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:58:26.570797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:58:26.571432 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:58:26.571489 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:58:26.571960 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:58:26.572029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:58:26.573069 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:58:26.573131 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:58:26.574376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:58:26.574440 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:58:26.593282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:58:26.594622 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Mar 17 17:58:26.594818 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:58:26.597566 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:58:26.597687 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:58:26.598375 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:58:26.598467 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:58:26.598997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:58:26.599062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:58:26.603592 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:58:26.603724 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:58:26.604489 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:58:26.604649 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:58:26.606144 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:58:26.606345 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:58:26.610484 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:58:26.622733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:58:26.635616 systemd[1]: Switching root. Mar 17 17:58:26.676955 systemd-journald[183]: Journal stopped Mar 17 17:58:27.921069 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
Mar 17 17:58:27.921143 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:58:27.921159 kernel: SELinux: policy capability open_perms=1 Mar 17 17:58:27.921176 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:58:27.921187 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:58:27.921199 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:58:27.921231 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:58:27.925420 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:58:27.925442 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:58:27.925456 kernel: audit: type=1403 audit(1742234306.795:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:58:27.925477 systemd[1]: Successfully loaded SELinux policy in 36.772ms. Mar 17 17:58:27.925508 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.074ms. Mar 17 17:58:27.925528 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:58:27.925541 systemd[1]: Detected virtualization kvm. Mar 17 17:58:27.925803 systemd[1]: Detected architecture x86-64. Mar 17 17:58:27.925836 systemd[1]: Detected first boot. Mar 17 17:58:27.925856 systemd[1]: Hostname set to . Mar 17 17:58:27.925869 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:58:27.925883 zram_generator::config[1037]: No configuration found. 
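The long `+PAM +AUDIT ... -APPARMOR` string in the systemd 256.8 banner lists compile-time features: `+` for built in, `-` for omitted. A small illustrative helper (not a systemd tool) to split such a banner into enabled and disabled names:

```shell
# Split a systemd feature banner into enabled (+NAME) or disabled (-NAME)
# entries; pass "+" or "-" as the first argument.
features() {
    mode="$1"
    banner="$2"
    printf '%s\n' "$banner" | tr ' ' '\n' | sed -n "s/^[$mode]//p"
}
```

Run against the banner above, `features -` would report APPARMOR, GNUTLS, ACL, FIDO2, IDN, PWQUALITY, P11KIT, QRENCODE, BPF_FRAMEWORK, XKBCOMMON, and SYSVINIT as compiled out.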
Mar 17 17:58:27.925909 kernel: Guest personality initialized and is inactive Mar 17 17:58:27.925922 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 17 17:58:27.925935 kernel: Initialized host personality Mar 17 17:58:27.925948 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:58:27.926006 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:58:27.926039 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:58:27.926058 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:58:27.926081 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:58:27.926100 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:58:27.926119 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:58:27.926132 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:58:27.926150 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:58:27.926163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:58:27.926177 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:58:27.926189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:58:27.926204 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:58:27.926216 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:58:27.926249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:58:27.926276 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:58:27.926297 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Mar 17 17:58:27.926315 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:58:27.926335 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:58:27.926354 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:58:27.926373 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:58:27.926394 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:58:27.926407 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:58:27.926431 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:58:27.926445 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:58:27.926461 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:58:27.926476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:58:27.926489 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:58:27.926502 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:58:27.926515 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:58:27.926533 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:58:27.926546 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:58:27.926559 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:58:27.926572 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:58:27.926585 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:58:27.926598 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:58:27.926610 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:58:27.926623 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:58:27.926637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:58:27.926653 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:58:27.926666 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:27.926679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:58:27.926692 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:58:27.926704 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:58:27.926723 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:58:27.926736 systemd[1]: Reached target machines.target - Containers. Mar 17 17:58:27.926748 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:58:27.926763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:58:27.926777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:58:27.926790 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:58:27.926802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:58:27.926815 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:58:27.926828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:58:27.926840 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
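Several units above are "skipped because of an unmet condition check" rather than failed; that distinction comes from `Condition*=` directives in the unit file. A hypothetical unit — not the shipped proc-xen.mount or var-lib-machines.mount — combining the two conditions this boot reports:

```ini
# Hypothetical illustration only: when a Condition* is unmet, systemd marks
# the unit as skipped (condition failed), not as a unit failure, which is
# why these lines are informational rather than errors.
[Unit]
Description=Example mount gated on hypervisor type and a path
ConditionVirtualization=xen
ConditionPathExists=/var/lib/machines.raw

[Mount]
What=xenfs
Where=/proc/xen
Type=xenfs
```

On this KVM droplet `ConditionVirtualization=xen` can never hold, so the unit would be skipped on every boot, matching the proc-xen.mount line above.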
Mar 17 17:58:27.926853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:58:27.926867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:58:27.926884 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:58:27.926905 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:58:27.926925 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:58:27.926945 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:58:27.926959 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:58:27.926973 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:58:27.926986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:58:27.926999 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:58:27.927015 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:58:27.927035 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:58:27.927051 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:58:27.927065 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:58:27.927077 systemd[1]: Stopped verity-setup.service. Mar 17 17:58:27.927093 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:58:27.927106 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Mar 17 17:58:27.927121 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:58:27.927139 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:58:27.927170 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:58:27.927196 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:58:27.927216 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:58:27.927232 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:58:27.932377 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:58:27.932410 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:58:27.932431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:58:27.932446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:58:27.932459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:58:27.932472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:58:27.932493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:58:27.932506 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:58:27.932561 systemd-journald[1103]: Collecting audit messages is disabled. Mar 17 17:58:27.932598 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:58:27.932616 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:58:27.932629 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:58:27.932642 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Mar 17 17:58:27.932659 kernel: loop: module loaded Mar 17 17:58:27.932674 systemd-journald[1103]: Journal started Mar 17 17:58:27.932699 systemd-journald[1103]: Runtime Journal (/run/log/journal/6ef8b000a04b44f3a612017593cce8e1) is 4.9M, max 39.3M, 34.4M free. Mar 17 17:58:27.599475 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:58:27.610586 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:58:27.611059 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:58:27.938355 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:58:27.942275 kernel: fuse: init (API version 7.39) Mar 17 17:58:27.951265 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:58:27.951361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:58:27.962276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:58:27.965450 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:58:27.973271 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:58:27.990284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:58:27.995273 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:58:28.003496 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:58:28.018272 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:58:28.023497 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:58:28.023705 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Mar 17 17:58:28.024558 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:58:28.024758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:58:28.025914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:58:28.027715 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:58:28.028277 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:58:28.028965 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:58:28.058273 kernel: loop0: detected capacity change from 0 to 8 Mar 17 17:58:28.060953 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:58:28.068551 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:58:28.069362 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:58:28.078037 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:58:28.082525 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:58:28.112645 systemd-journald[1103]: Time spent on flushing to /var/log/journal/6ef8b000a04b44f3a612017593cce8e1 is 61.209ms for 1000 entries. Mar 17 17:58:28.112645 systemd-journald[1103]: System Journal (/var/log/journal/6ef8b000a04b44f3a612017593cce8e1) is 8M, max 195.6M, 187.6M free. Mar 17 17:58:28.197956 systemd-journald[1103]: Received client request to flush runtime journal. Mar 17 17:58:28.198044 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:58:28.198087 kernel: ACPI: bus type drm_connector registered Mar 17 17:58:28.198742 kernel: loop1: detected capacity change from 0 to 138176 Mar 17 17:58:28.113539 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
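The journald flush statistics above (61.209 ms spent for 1000 entries) average out to roughly 61 µs per journal entry. The arithmetic as a one-liner:

```shell
# Per-entry flush cost in microseconds, given total milliseconds and entry
# count, as reported in the "Time spent on flushing" line above.
per_entry_us() {
    awk -v ms="$1" -v n="$2" 'BEGIN { printf "%.1f\n", ms * 1000 / n }'
}
```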
Mar 17 17:58:28.118661 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:58:28.120860 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:58:28.154950 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:58:28.156132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:58:28.197625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:58:28.200480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:58:28.206385 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:58:28.215297 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:58:28.234534 systemd-tmpfiles[1130]: ACLs are not supported, ignoring. Mar 17 17:58:28.234552 systemd-tmpfiles[1130]: ACLs are not supported, ignoring. Mar 17 17:58:28.250175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:58:28.261432 kernel: loop2: detected capacity change from 0 to 147912 Mar 17 17:58:28.262484 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:58:28.263405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:58:28.281165 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:58:28.313817 kernel: loop3: detected capacity change from 0 to 218376 Mar 17 17:58:28.323787 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:58:28.360292 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:58:28.372580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 17 17:58:28.400388 kernel: loop4: detected capacity change from 0 to 8 Mar 17 17:58:28.409314 kernel: loop5: detected capacity change from 0 to 138176 Mar 17 17:58:28.426783 kernel: loop6: detected capacity change from 0 to 147912 Mar 17 17:58:28.428545 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Mar 17 17:58:28.428579 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Mar 17 17:58:28.448302 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:58:28.451274 kernel: loop7: detected capacity change from 0 to 218376 Mar 17 17:58:28.475838 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Mar 17 17:58:28.478916 (sd-merge)[1186]: Merged extensions into '/usr'. Mar 17 17:58:28.487706 systemd[1]: Reload requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:58:28.487740 systemd[1]: Reloading... Mar 17 17:58:28.679705 zram_generator::config[1217]: No configuration found. Mar 17 17:58:28.849507 ldconfig[1125]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:58:28.899269 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:58:29.008811 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:58:29.009738 systemd[1]: Reloading finished in 520 ms. Mar 17 17:58:29.035574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:58:29.036884 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:58:29.065667 systemd[1]: Starting ensure-sysext.service... Mar 17 17:58:29.075646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
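The sd-merge step above ("Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'") names each extension after its image under /etc/extensions with the `.raw` suffix dropped — which is how the `kubernetes.raw` link written by Ignition surfaces as the `kubernetes` extension. A trivial sketch of that naming convention as I understand systemd-sysext's behavior (hedged, not taken from its source):

```shell
# Derive the extension name systemd-sysext reports from an image path:
# the basename minus a trailing .raw (directory-based extensions keep
# their directory name unchanged).
sysext_name() {
    base=$(basename "$1")
    printf '%s\n' "${base%.raw}"
}
```

The paired loop capacity changes (loop0/loop4 at 8, loop1/loop5 at 138176, and so on) are consistent with the same four images being attached once before the reload and once after.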
Mar 17 17:58:29.107505 systemd[1]: Reload requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:58:29.107533 systemd[1]: Reloading...
Mar 17 17:58:29.172269 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:58:29.173779 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:58:29.177550 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:58:29.178045 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 17 17:58:29.178145 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 17 17:58:29.192459 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:58:29.192479 systemd-tmpfiles[1260]: Skipping /boot
Mar 17 17:58:29.257432 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:58:29.257451 systemd-tmpfiles[1260]: Skipping /boot
Mar 17 17:58:29.325287 zram_generator::config[1292]: No configuration found.
Mar 17 17:58:29.526888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:58:29.613998 systemd[1]: Reloading finished in 505 ms.
Mar 17 17:58:29.627920 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:58:29.641340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:58:29.655678 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:58:29.658538 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:58:29.662638 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:58:29.668567 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:58:29.677574 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:58:29.687394 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:58:29.693701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:29.693913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:58:29.703114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:58:29.706566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:58:29.714688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:58:29.715298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:58:29.715430 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:58:29.715526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:29.726507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:29.726770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:58:29.727009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:58:29.727165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:58:29.737744 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:58:29.739335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:29.746996 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:58:29.752330 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:58:29.759506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:29.759926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:58:29.766592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:58:29.768102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:58:29.768310 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:58:29.771113 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:58:29.771732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:29.772695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:58:29.774451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:58:29.783352 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:58:29.794011 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:58:29.797062 systemd-udevd[1339]: Using default interface naming scheme 'v255'.
Mar 17 17:58:29.804208 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:58:29.804647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:58:29.806220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:58:29.814642 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:58:29.818913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:58:29.819170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:58:29.819959 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:58:29.820929 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:58:29.826170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:58:29.826299 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:58:29.836729 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:58:29.842548 augenrules[1374]: No rules
Mar 17 17:58:29.844987 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:58:29.845354 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:58:29.858032 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:58:29.863901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:58:29.875573 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:58:29.961606 systemd-resolved[1337]: Positive Trust Anchors:
Mar 17 17:58:29.962281 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:58:29.962398 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:58:29.968915 systemd-resolved[1337]: Using system hostname 'ci-4230.1.0-6-847a660ba6'.
Mar 17 17:58:29.971230 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:58:29.971734 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:58:30.018690 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:58:30.019707 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:58:30.043279 systemd-networkd[1387]: lo: Link UP
Mar 17 17:58:30.043289 systemd-networkd[1387]: lo: Gained carrier
Mar 17 17:58:30.045762 systemd-networkd[1387]: Enumeration completed
Mar 17 17:58:30.045930 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:58:30.046581 systemd[1]: Reached target network.target - Network.
Mar 17 17:58:30.054490 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:58:30.063481 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:58:30.087003 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:58:30.110173 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Mar 17 17:58:30.118544 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1400)
Mar 17 17:58:30.120392 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Mar 17 17:58:30.122364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:30.122643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:58:30.129542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:58:30.139749 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:58:30.154982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:58:30.156483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:58:30.156525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:58:30.156558 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:58:30.156574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:58:30.164505 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:58:30.190102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:58:30.192409 kernel: ISO 9660 Extensions: RRIP_1991A
Mar 17 17:58:30.190970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:58:30.196562 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Mar 17 17:58:30.198041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:58:30.199560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:58:30.200684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:58:30.201302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:58:30.210008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:58:30.211496 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 17:58:30.211038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:58:30.225316 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:58:30.243607 systemd-networkd[1387]: eth0: Configuring with /run/systemd/network/10-92:17:f5:8f:fc:38.network.
Mar 17 17:58:30.246568 systemd-networkd[1387]: eth1: Configuring with /run/systemd/network/10-e6:dd:1e:85:e1:78.network.
Mar 17 17:58:30.247949 systemd-networkd[1387]: eth0: Link UP
Mar 17 17:58:30.247958 systemd-networkd[1387]: eth0: Gained carrier
Mar 17 17:58:30.252802 systemd-networkd[1387]: eth1: Link UP
Mar 17 17:58:30.252948 systemd-networkd[1387]: eth1: Gained carrier
Mar 17 17:58:30.261457 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection.
Mar 17 17:58:30.262316 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection.
Mar 17 17:58:30.290292 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 17 17:58:30.302722 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 17:58:30.304172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:58:30.313467 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:58:30.339873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:58:30.369332 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:58:30.388851 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar 17 17:58:30.388926 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar 17 17:58:30.414273 kernel: Console: switching to colour dummy device 80x25
Mar 17 17:58:30.414365 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:58:30.414380 kernel: [drm] features: -context_init
Mar 17 17:58:30.414417 kernel: [drm] number of scanouts: 1
Mar 17 17:58:30.415258 kernel: [drm] number of cap sets: 0
Mar 17 17:58:30.427809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:58:30.432282 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Mar 17 17:58:30.439720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:58:30.440096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:30.442298 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 17 17:58:30.444316 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:58:30.446768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:58:30.451516 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:58:30.483557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:58:30.483812 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:30.487936 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:58:30.494608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:58:30.546307 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:58:30.572156 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:58:30.583872 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:58:30.593588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:58:30.602287 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:58:30.628889 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:58:30.631693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:58:30.631907 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:58:30.632131 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:58:30.632373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:58:30.632700 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:58:30.632928 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:58:30.633042 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:58:30.633155 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:58:30.633196 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:58:30.633310 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:58:30.635287 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:58:30.637118 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:58:30.642868 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:58:30.644525 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:58:30.645263 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:58:30.656384 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:58:30.657751 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:58:30.666559 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:58:30.668061 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:58:30.670724 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:58:30.672530 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:58:30.673171 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:58:30.673209 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:58:30.674518 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:58:30.680426 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:58:30.685663 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:58:30.695377 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:58:30.700441 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:58:30.705910 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:58:30.706714 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:58:30.717632 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:58:30.725049 jq[1458]: false
Mar 17 17:58:30.727379 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:58:30.732422 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:58:30.736394 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:58:30.747092 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:58:30.751062 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:58:30.751782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:58:30.758452 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:58:30.761720 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:58:30.766047 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:58:30.774054 extend-filesystems[1461]: Found loop4
Mar 17 17:58:30.782806 extend-filesystems[1461]: Found loop5
Mar 17 17:58:30.782806 extend-filesystems[1461]: Found loop6
Mar 17 17:58:30.782806 extend-filesystems[1461]: Found loop7
Mar 17 17:58:30.782806 extend-filesystems[1461]: Found vda
Mar 17 17:58:30.777803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:58:30.814121 coreos-metadata[1456]: Mar 17 17:58:30.777 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:58:30.814121 coreos-metadata[1456]: Mar 17 17:58:30.795 INFO Fetch successful
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found vda1
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found vda2
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found vda3
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found usr
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found vda4
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found vda6
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found vda7
Mar 17 17:58:30.816346 extend-filesystems[1461]: Found vda9
Mar 17 17:58:30.816346 extend-filesystems[1461]: Checking size of /dev/vda9
Mar 17 17:58:30.816346 extend-filesystems[1461]: Resized partition /dev/vda9
Mar 17 17:58:30.844900 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Mar 17 17:58:30.778521 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:58:30.845123 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:58:30.815880 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:58:30.816589 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:58:30.846467 jq[1471]: true
Mar 17 17:58:30.847764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:58:30.847985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:58:30.854764 dbus-daemon[1457]: [system] SELinux support is enabled
Mar 17 17:58:30.856823 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:58:30.869182 jq[1491]: true
Mar 17 17:58:30.897205 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:58:30.897267 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:58:30.900233 tar[1475]: linux-amd64/LICENSE
Mar 17 17:58:30.900233 tar[1475]: linux-amd64/helm
Mar 17 17:58:30.900361 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:58:30.900474 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Mar 17 17:58:30.900497 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:58:30.921454 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 17 17:58:30.919171 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:58:30.922252 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:58:30.924039 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:58:30.946704 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:58:30.946704 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 17 17:58:30.946704 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 17 17:58:30.941095 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:58:30.963394 update_engine[1469]: I20250317 17:58:30.939635 1469 main.cc:92] Flatcar Update Engine starting
Mar 17 17:58:30.971514 extend-filesystems[1461]: Resized filesystem in /dev/vda9
Mar 17 17:58:30.971514 extend-filesystems[1461]: Found vdb
Mar 17 17:58:30.942965 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:58:30.988602 update_engine[1469]: I20250317 17:58:30.973943 1469 update_check_scheduler.cc:74] Next update check in 6m22s
Mar 17 17:58:30.960156 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:58:30.976222 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:58:31.036672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1396)
Mar 17 17:58:31.089576 bash[1520]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:58:31.101822 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:58:31.119517 systemd[1]: Starting sshkeys.service...
Mar 17 17:58:31.132562 systemd-logind[1468]: New seat seat0.
Mar 17 17:58:31.136347 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:58:31.136369 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:58:31.137519 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:58:31.222923 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:58:31.234668 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:58:31.326101 coreos-metadata[1524]: Mar 17 17:58:31.326 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:58:31.335719 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:58:31.339544 coreos-metadata[1524]: Mar 17 17:58:31.339 INFO Fetch successful
Mar 17 17:58:31.361916 unknown[1524]: wrote ssh authorized keys file for user: core
Mar 17 17:58:31.409127 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:58:31.410890 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:58:31.417482 systemd[1]: Finished sshkeys.service.
Mar 17 17:58:31.541523 containerd[1494]: time="2025-03-17T17:58:31.541417923Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:58:31.584649 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:58:31.585364 systemd-networkd[1387]: eth1: Gained IPv6LL
Mar 17 17:58:31.586113 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection.
Mar 17 17:58:31.589283 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:58:31.592859 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:58:31.600678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:58:31.611788 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:58:31.637198 containerd[1494]: time="2025-03-17T17:58:31.637137730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:58:31.647033 containerd[1494]: time="2025-03-17T17:58:31.646985226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647156037Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647182196Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647360151Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647376530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647430465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647441713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647663841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647677513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647690681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647699980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.647788546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648509 containerd[1494]: time="2025-03-17T17:58:31.648004733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648875 containerd[1494]: time="2025-03-17T17:58:31.648172936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:58:31.648875 containerd[1494]: time="2025-03-17T17:58:31.648186405Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:58:31.648875 containerd[1494]: time="2025-03-17T17:58:31.648291220Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:58:31.648875 containerd[1494]: time="2025-03-17T17:58:31.648350042Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:58:31.660450 containerd[1494]: time="2025-03-17T17:58:31.659965292Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:58:31.660450 containerd[1494]: time="2025-03-17T17:58:31.660055770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:58:31.660450 containerd[1494]: time="2025-03-17T17:58:31.660080197Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:58:31.660450 containerd[1494]: time="2025-03-17T17:58:31.660102864Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:58:31.660450 containerd[1494]: time="2025-03-17T17:58:31.660123817Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:58:31.660450 containerd[1494]: time="2025-03-17T17:58:31.660375725Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:58:31.660748 containerd[1494]: time="2025-03-17T17:58:31.660709564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.660859134Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.660883545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.660903134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.660922486Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.660941876Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.660960039Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.660979221Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.661000027Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.661018964Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.661035893Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.661052788Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.661079537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.661099588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"...
type=io.containerd.grpc.v1 Mar 17 17:58:31.661671 containerd[1494]: time="2025-03-17T17:58:31.661116385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.661993 containerd[1494]: time="2025-03-17T17:58:31.661133905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.661993 containerd[1494]: time="2025-03-17T17:58:31.661150962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.661993 containerd[1494]: time="2025-03-17T17:58:31.661170647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.661993 containerd[1494]: time="2025-03-17T17:58:31.661187496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.661993 containerd[1494]: time="2025-03-17T17:58:31.661204536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.661993 containerd[1494]: time="2025-03-17T17:58:31.661222039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.664796535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.664866345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.664894033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.664914273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.664949818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.664986730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.665142073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.665441 containerd[1494]: time="2025-03-17T17:58:31.665168592Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667088735Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667160302Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667178322Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667196281Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667211696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667232387Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667262185Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:58:31.667316 containerd[1494]: time="2025-03-17T17:58:31.667305052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:58:31.667859 containerd[1494]: time="2025-03-17T17:58:31.667760888Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:58:31.667859 containerd[1494]: time="2025-03-17T17:58:31.667840094Z" level=info msg="Connect containerd service" Mar 17 17:58:31.668112 containerd[1494]: time="2025-03-17T17:58:31.667910962Z" level=info msg="using legacy CRI server" Mar 17 17:58:31.668112 containerd[1494]: time="2025-03-17T17:58:31.667926468Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:58:31.668112 containerd[1494]: time="2025-03-17T17:58:31.668096261Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670100114Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670403867Z" level=info msg="Start subscribing containerd event" Mar 17 
17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670460151Z" level=info msg="Start recovering state" Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670522725Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670532827Z" level=info msg="Start event monitor" Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670557991Z" level=info msg="Start snapshots syncer" Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670566990Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670574476Z" level=info msg="Start streaming server" Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670591154Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:58:31.671399 containerd[1494]: time="2025-03-17T17:58:31.670654661Z" level=info msg="containerd successfully booted in 0.131147s" Mar 17 17:58:31.671422 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:58:31.685371 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:58:31.698651 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:58:31.701283 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:58:31.731131 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:58:31.732106 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:58:31.746731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:58:31.762205 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:58:31.772249 systemd[1]: Started sshd@0-159.223.200.207:22-139.178.68.195:53360.service - OpenSSH per-connection server daemon (139.178.68.195:53360). 
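
The "failed to load cni during init" error above means containerd found no network config in /etc/cni/net.d (the CRI config dump earlier shows NetworkPluginConfDir:/etc/cni/net.d with NetworkPluginMaxConfNum:1). A minimal bridge conflist of the kind a CNI installer would later drop into that directory might look like the sketch below; the network name, bridge name, and address ranges are illustrative, not taken from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Until such a file exists, the CRI plugin starts anyway and its "cni network conf syncer" (started below) picks the config up once it appears, so this error on first boot is non-fatal.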
Mar 17 17:58:31.788928 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:58:31.801019 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:58:31.808664 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:58:31.809348 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:58:31.924289 sshd[1570]: Accepted publickey for core from 139.178.68.195 port 53360 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:31.928565 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:31.944051 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:58:31.957213 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:58:31.979332 systemd-logind[1468]: New session 1 of user core. Mar 17 17:58:31.991113 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:58:32.005899 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:58:32.020936 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:58:32.026610 systemd-logind[1468]: New session c1 of user core. Mar 17 17:58:32.098368 systemd-networkd[1387]: eth0: Gained IPv6LL Mar 17 17:58:32.099011 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Mar 17 17:58:32.192505 tar[1475]: linux-amd64/README.md Mar 17 17:58:32.231029 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:58:32.249036 systemd[1577]: Queued start job for default target default.target. Mar 17 17:58:32.258452 systemd[1577]: Created slice app.slice - User Application Slice. Mar 17 17:58:32.258499 systemd[1577]: Reached target paths.target - Paths. Mar 17 17:58:32.258559 systemd[1577]: Reached target timers.target - Timers. 
Mar 17 17:58:32.262450 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:58:32.274681 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:58:32.275592 systemd[1577]: Reached target sockets.target - Sockets. Mar 17 17:58:32.275660 systemd[1577]: Reached target basic.target - Basic System. Mar 17 17:58:32.275700 systemd[1577]: Reached target default.target - Main User Target. Mar 17 17:58:32.275732 systemd[1577]: Startup finished in 230ms. Mar 17 17:58:32.276793 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:58:32.284467 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:58:32.367700 systemd[1]: Started sshd@1-159.223.200.207:22-139.178.68.195:53370.service - OpenSSH per-connection server daemon (139.178.68.195:53370). Mar 17 17:58:32.434118 sshd[1591]: Accepted publickey for core from 139.178.68.195 port 53370 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:32.437533 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:32.446829 systemd-logind[1468]: New session 2 of user core. Mar 17 17:58:32.451524 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:58:32.519194 sshd[1593]: Connection closed by 139.178.68.195 port 53370 Mar 17 17:58:32.520747 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:32.531707 systemd[1]: sshd@1-159.223.200.207:22-139.178.68.195:53370.service: Deactivated successfully. Mar 17 17:58:32.534085 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:58:32.536383 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:58:32.544728 systemd[1]: Started sshd@2-159.223.200.207:22-139.178.68.195:53380.service - OpenSSH per-connection server daemon (139.178.68.195:53380). Mar 17 17:58:32.549488 systemd-logind[1468]: Removed session 2. 
Mar 17 17:58:32.597937 sshd[1598]: Accepted publickey for core from 139.178.68.195 port 53380 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:32.599737 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:32.608943 systemd-logind[1468]: New session 3 of user core. Mar 17 17:58:32.614588 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:58:32.681270 sshd[1601]: Connection closed by 139.178.68.195 port 53380 Mar 17 17:58:32.681827 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:32.685550 systemd[1]: sshd@2-159.223.200.207:22-139.178.68.195:53380.service: Deactivated successfully. Mar 17 17:58:32.688319 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:58:32.690476 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:58:32.691873 systemd-logind[1468]: Removed session 3. Mar 17 17:58:32.836758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:32.838163 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:58:32.842470 systemd[1]: Startup finished in 1.006s (kernel) + 7.095s (initrd) + 6.083s (userspace) = 14.185s. 
Mar 17 17:58:32.847768 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:58:33.492207 kubelet[1611]: E0317 17:58:33.492123 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:58:33.495882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:58:33.496102 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:58:33.496766 systemd[1]: kubelet.service: Consumed 1.152s CPU time, 258.5M memory peak. Mar 17 17:58:42.708064 systemd[1]: Started sshd@3-159.223.200.207:22-139.178.68.195:50830.service - OpenSSH per-connection server daemon (139.178.68.195:50830). Mar 17 17:58:42.756272 sshd[1623]: Accepted publickey for core from 139.178.68.195 port 50830 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:42.758015 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:42.763558 systemd-logind[1468]: New session 4 of user core. Mar 17 17:58:42.779498 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:58:42.843743 sshd[1625]: Connection closed by 139.178.68.195 port 50830 Mar 17 17:58:42.843206 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:42.860976 systemd[1]: sshd@3-159.223.200.207:22-139.178.68.195:50830.service: Deactivated successfully. Mar 17 17:58:42.863276 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:58:42.865502 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. 
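
The kubelet exit above is expected on first boot: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, and the unit simply fails and restarts until that happens. A minimal sketch of such a file is shown below; every value here is illustrative and not taken from this host:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, matching SystemdCgroup:true in the containerd CRI config above
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10   # illustrative cluster DNS service address
```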
Mar 17 17:58:42.870800 systemd[1]: Started sshd@4-159.223.200.207:22-139.178.68.195:50842.service - OpenSSH per-connection server daemon (139.178.68.195:50842). Mar 17 17:58:42.872902 systemd-logind[1468]: Removed session 4. Mar 17 17:58:42.927901 sshd[1630]: Accepted publickey for core from 139.178.68.195 port 50842 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:42.929708 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:42.938188 systemd-logind[1468]: New session 5 of user core. Mar 17 17:58:42.947534 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:58:43.006519 sshd[1633]: Connection closed by 139.178.68.195 port 50842 Mar 17 17:58:43.007902 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:43.030746 systemd[1]: sshd@4-159.223.200.207:22-139.178.68.195:50842.service: Deactivated successfully. Mar 17 17:58:43.033469 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:58:43.034789 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:58:43.046656 systemd[1]: Started sshd@5-159.223.200.207:22-139.178.68.195:50850.service - OpenSSH per-connection server daemon (139.178.68.195:50850). Mar 17 17:58:43.048779 systemd-logind[1468]: Removed session 5. Mar 17 17:58:43.096796 sshd[1638]: Accepted publickey for core from 139.178.68.195 port 50850 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:43.099091 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:43.106326 systemd-logind[1468]: New session 6 of user core. Mar 17 17:58:43.117615 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:58:43.180198 sshd[1641]: Connection closed by 139.178.68.195 port 50850 Mar 17 17:58:43.180900 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Mar 17 17:58:43.193691 systemd[1]: sshd@5-159.223.200.207:22-139.178.68.195:50850.service: Deactivated successfully. Mar 17 17:58:43.196440 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:58:43.198344 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:58:43.208796 systemd[1]: Started sshd@6-159.223.200.207:22-139.178.68.195:50852.service - OpenSSH per-connection server daemon (139.178.68.195:50852). Mar 17 17:58:43.210659 systemd-logind[1468]: Removed session 6. Mar 17 17:58:43.256017 sshd[1646]: Accepted publickey for core from 139.178.68.195 port 50852 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:58:43.258160 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:58:43.266177 systemd-logind[1468]: New session 7 of user core. Mar 17 17:58:43.281566 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:58:43.353987 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:58:43.354496 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:58:43.679823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:58:43.689407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:58:43.858381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:58:43.873938 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:58:43.877617 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 17 17:58:43.887289 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:58:43.930411 kubelet[1674]: E0317 17:58:43.930230 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:58:43.935627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:58:43.935847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:58:43.937891 systemd[1]: kubelet.service: Consumed 184ms CPU time, 103.7M memory peak. Mar 17 17:58:44.333819 dockerd[1676]: time="2025-03-17T17:58:44.333656881Z" level=info msg="Starting up" Mar 17 17:58:44.470041 dockerd[1676]: time="2025-03-17T17:58:44.469784007Z" level=info msg="Loading containers: start." Mar 17 17:58:44.658264 kernel: Initializing XFRM netlink socket Mar 17 17:58:44.690291 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Mar 17 17:58:44.762470 systemd-networkd[1387]: docker0: Link UP Mar 17 17:58:45.967685 systemd-resolved[1337]: Clock change detected. Flushing caches. Mar 17 17:58:45.967948 systemd-timesyncd[1367]: Contacted time server 75.72.171.171:123 (2.flatcar.pool.ntp.org). Mar 17 17:58:45.968009 systemd-timesyncd[1367]: Initial clock synchronization to Mon 2025-03-17 17:58:45.966931 UTC. Mar 17 17:58:45.971109 dockerd[1676]: time="2025-03-17T17:58:45.970453078Z" level=info msg="Loading containers: done." 
Mar 17 17:58:45.987762 dockerd[1676]: time="2025-03-17T17:58:45.987719837Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:58:45.988081 dockerd[1676]: time="2025-03-17T17:58:45.988060575Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:58:45.988277 dockerd[1676]: time="2025-03-17T17:58:45.988253999Z" level=info msg="Daemon has completed initialization" Mar 17 17:58:46.020775 dockerd[1676]: time="2025-03-17T17:58:46.020687443Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:58:46.021036 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:58:46.758326 containerd[1494]: time="2025-03-17T17:58:46.758208137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 17:58:47.354453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891397618.mount: Deactivated successfully. 
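
The containerd and dockerd entries above use logfmt-style `key=value` fields (`time=…`, `level=…`, `msg=…`), with values containing spaces wrapped in quotes. A small Python sketch for splitting one such entry into its fields, assuming only the format visible in this log:

```python
import re

# One key=value pair: value is either a quoted string (with \" escapes) or a bare token.
FIELD_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_containerd_fields(entry: str) -> dict:
    """Return the key=value fields of a containerd/dockerd log entry, unquoting quoted values."""
    fields = {}
    for key, value in FIELD_RE.findall(entry):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        fields[key] = value
    return fields

entry = 'time="2025-03-17T17:58:46.020687443Z" level=info msg="API listen on /run/docker.sock"'
fields = parse_containerd_fields(entry)
```

This only handles the journal payload after the `dockerd[1676]:` / `containerd[1494]:` prefix; the syslog-style timestamp and unit name would need to be stripped first.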
Mar 17 17:58:48.524766 containerd[1494]: time="2025-03-17T17:58:48.524688305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:48.525736 containerd[1494]: time="2025-03-17T17:58:48.525698187Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 17 17:58:48.526598 containerd[1494]: time="2025-03-17T17:58:48.526229128Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:48.529008 containerd[1494]: time="2025-03-17T17:58:48.528968612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:48.530212 containerd[1494]: time="2025-03-17T17:58:48.530175708Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 1.771921997s" Mar 17 17:58:48.530325 containerd[1494]: time="2025-03-17T17:58:48.530311656Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 17 17:58:48.531214 containerd[1494]: time="2025-03-17T17:58:48.531118872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 17:58:49.939423 containerd[1494]: time="2025-03-17T17:58:49.937840016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:49.940695 containerd[1494]: time="2025-03-17T17:58:49.940622533Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 17 17:58:49.941541 containerd[1494]: time="2025-03-17T17:58:49.941449338Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:49.944858 containerd[1494]: time="2025-03-17T17:58:49.944749235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:49.949852 containerd[1494]: time="2025-03-17T17:58:49.948901678Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 1.417533982s" Mar 17 17:58:49.949852 containerd[1494]: time="2025-03-17T17:58:49.948965870Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 17 17:58:49.951869 containerd[1494]: time="2025-03-17T17:58:49.951792074Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 17:58:51.074783 containerd[1494]: time="2025-03-17T17:58:51.074730605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:51.076059 containerd[1494]: time="2025-03-17T17:58:51.075899728Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419" Mar 17 17:58:51.076059 containerd[1494]: time="2025-03-17T17:58:51.075985309Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:51.079723 containerd[1494]: time="2025-03-17T17:58:51.079634118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:58:51.080917 containerd[1494]: time="2025-03-17T17:58:51.080773689Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 1.128902571s" Mar 17 17:58:51.080917 containerd[1494]: time="2025-03-17T17:58:51.080827134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 17 17:58:51.081931 containerd[1494]: time="2025-03-17T17:58:51.081887941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:58:52.117807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192335027.mount: Deactivated successfully. 
Mar 17 17:58:52.647614 containerd[1494]: time="2025-03-17T17:58:52.647538593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:52.648845 containerd[1494]: time="2025-03-17T17:58:52.648750956Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185"
Mar 17 17:58:52.650006 containerd[1494]: time="2025-03-17T17:58:52.649906695Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:52.652712 containerd[1494]: time="2025-03-17T17:58:52.652655067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:52.653782 containerd[1494]: time="2025-03-17T17:58:52.653635396Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 1.571692566s"
Mar 17 17:58:52.653782 containerd[1494]: time="2025-03-17T17:58:52.653678502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\""
Mar 17 17:58:52.654502 containerd[1494]: time="2025-03-17T17:58:52.654305946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 17 17:58:52.656004 systemd-resolved[1337]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Mar 17 17:58:53.088808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538436864.mount: Deactivated successfully.
Mar 17 17:58:54.070249 containerd[1494]: time="2025-03-17T17:58:54.070176712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:54.072457 containerd[1494]: time="2025-03-17T17:58:54.072286875Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Mar 17 17:58:54.073049 containerd[1494]: time="2025-03-17T17:58:54.073003926Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:54.077848 containerd[1494]: time="2025-03-17T17:58:54.077763521Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.423426153s"
Mar 17 17:58:54.077848 containerd[1494]: time="2025-03-17T17:58:54.077806841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Mar 17 17:58:54.078633 containerd[1494]: time="2025-03-17T17:58:54.076746994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:54.078633 containerd[1494]: time="2025-03-17T17:58:54.078313673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 17:58:54.467351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206080392.mount: Deactivated successfully.
Mar 17 17:58:54.472641 containerd[1494]: time="2025-03-17T17:58:54.472564630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:54.473424 containerd[1494]: time="2025-03-17T17:58:54.473344894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 17 17:58:54.475147 containerd[1494]: time="2025-03-17T17:58:54.473663474Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:54.476214 containerd[1494]: time="2025-03-17T17:58:54.476167502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:54.477552 containerd[1494]: time="2025-03-17T17:58:54.477502471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 399.157925ms"
Mar 17 17:58:54.477720 containerd[1494]: time="2025-03-17T17:58:54.477694959Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 17 17:58:54.478504 containerd[1494]: time="2025-03-17T17:58:54.478476573Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 17 17:58:54.966396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010281896.mount: Deactivated successfully.
Mar 17 17:58:55.353795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:58:55.361467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:58:55.523067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:58:55.537344 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:58:55.620855 kubelet[2057]: E0317 17:58:55.620221 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:58:55.624431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:58:55.624645 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:58:55.626121 systemd[1]: kubelet.service: Consumed 189ms CPU time, 103.7M memory peak.
Mar 17 17:58:55.736036 systemd-resolved[1337]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Mar 17 17:58:56.852443 containerd[1494]: time="2025-03-17T17:58:56.852367969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:56.853650 containerd[1494]: time="2025-03-17T17:58:56.853114728Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320"
Mar 17 17:58:56.854751 containerd[1494]: time="2025-03-17T17:58:56.854685048Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:56.857855 containerd[1494]: time="2025-03-17T17:58:56.857799437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:58:56.859263 containerd[1494]: time="2025-03-17T17:58:56.859079639Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.380420953s"
Mar 17 17:58:56.859263 containerd[1494]: time="2025-03-17T17:58:56.859130500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Mar 17 17:58:59.481594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:58:59.481774 systemd[1]: kubelet.service: Consumed 189ms CPU time, 103.7M memory peak.
Mar 17 17:58:59.495273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:58:59.548007 systemd[1]: Reload requested from client PID 2097 ('systemctl') (unit session-7.scope)...
Mar 17 17:58:59.548031 systemd[1]: Reloading...
Mar 17 17:58:59.699850 zram_generator::config[2141]: No configuration found.
Mar 17 17:58:59.846474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:58:59.971790 systemd[1]: Reloading finished in 423 ms.
Mar 17 17:59:00.035132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:59:00.039439 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:59:00.049191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:59:00.051377 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:59:00.051690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:59:00.051761 systemd[1]: kubelet.service: Consumed 138ms CPU time, 93.4M memory peak.
Mar 17 17:59:00.059300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:59:00.232459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:59:00.246950 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:59:00.337090 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:59:00.337090 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:59:00.337090 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:59:00.337748 kubelet[2202]: I0317 17:59:00.337224 2202 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:59:00.633107 kubelet[2202]: I0317 17:59:00.633035 2202 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 17:59:00.633107 kubelet[2202]: I0317 17:59:00.633083 2202 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:59:00.633475 kubelet[2202]: I0317 17:59:00.633449 2202 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 17:59:00.662300 kubelet[2202]: I0317 17:59:00.662239 2202 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:59:00.671598 kubelet[2202]: E0317 17:59:00.670709 2202 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://159.223.200.207:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:00.679791 kubelet[2202]: E0317 17:59:00.679738 2202 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:59:00.679791 kubelet[2202]: I0317 17:59:00.679789 2202 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:59:00.685584 kubelet[2202]: I0317 17:59:00.685543 2202 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:59:00.685882 kubelet[2202]: I0317 17:59:00.685834 2202 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:59:00.686121 kubelet[2202]: I0317 17:59:00.685883 2202 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-6-847a660ba6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:59:00.686216 kubelet[2202]: I0317 17:59:00.686130 2202 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:59:00.686216 kubelet[2202]: I0317 17:59:00.686140 2202 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 17:59:00.686350 kubelet[2202]: I0317 17:59:00.686331 2202 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:59:00.689839 kubelet[2202]: I0317 17:59:00.689775 2202 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 17:59:00.690001 kubelet[2202]: I0317 17:59:00.689895 2202 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:59:00.690001 kubelet[2202]: I0317 17:59:00.689925 2202 kubelet.go:352] "Adding apiserver pod source"
Mar 17 17:59:00.690001 kubelet[2202]: I0317 17:59:00.689940 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:59:00.692761 kubelet[2202]: W0317 17:59:00.692673 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.200.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-6-847a660ba6&limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:00.692761 kubelet[2202]: E0317 17:59:00.692742 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.223.200.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-6-847a660ba6&limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:00.694980 kubelet[2202]: W0317 17:59:00.694457 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.200.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:00.694980 kubelet[2202]: E0317 17:59:00.694498 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.200.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:00.696723 kubelet[2202]: I0317 17:59:00.696676 2202 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:59:00.700781 kubelet[2202]: I0317 17:59:00.700612 2202 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:59:00.701771 kubelet[2202]: W0317 17:59:00.701201 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:59:00.703284 kubelet[2202]: I0317 17:59:00.703085 2202 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 17:59:00.703284 kubelet[2202]: I0317 17:59:00.703123 2202 server.go:1287] "Started kubelet"
Mar 17 17:59:00.704373 kubelet[2202]: I0317 17:59:00.704142 2202 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:59:00.705256 kubelet[2202]: I0317 17:59:00.705212 2202 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 17:59:00.708123 kubelet[2202]: I0317 17:59:00.707627 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:59:00.708123 kubelet[2202]: I0317 17:59:00.708002 2202 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:59:00.711881 kubelet[2202]: I0317 17:59:00.711844 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:59:00.713868 kubelet[2202]: E0317 17:59:00.710077 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.223.200.207:6443/api/v1/namespaces/default/events\": dial tcp 159.223.200.207:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-6-847a660ba6.182da8e6f1186d1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-6-847a660ba6,UID:ci-4230.1.0-6-847a660ba6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-6-847a660ba6,},FirstTimestamp:2025-03-17 17:59:00.703104287 +0000 UTC m=+0.449661004,LastTimestamp:2025-03-17 17:59:00.703104287 +0000 UTC m=+0.449661004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-6-847a660ba6,}"
Mar 17 17:59:00.713868 kubelet[2202]: I0317 17:59:00.713057 2202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:59:00.716862 kubelet[2202]: E0317 17:59:00.716807 2202 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.0-6-847a660ba6\" not found"
Mar 17 17:59:00.717474 kubelet[2202]: I0317 17:59:00.717030 2202 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 17:59:00.717474 kubelet[2202]: I0317 17:59:00.717267 2202 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:59:00.717474 kubelet[2202]: I0317 17:59:00.717318 2202 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:59:00.718118 kubelet[2202]: W0317 17:59:00.718062 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.200.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:00.718258 kubelet[2202]: E0317 17:59:00.718236 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.200.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:00.718749 kubelet[2202]: E0317 17:59:00.718696 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.200.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-6-847a660ba6?timeout=10s\": dial tcp 159.223.200.207:6443: connect: connection refused" interval="200ms"
Mar 17 17:59:00.722056 kubelet[2202]: I0317 17:59:00.720927 2202 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:59:00.722056 kubelet[2202]: I0317 17:59:00.721078 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:59:00.724223 kubelet[2202]: I0317 17:59:00.724174 2202 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:59:00.757245 kubelet[2202]: E0317 17:59:00.757210 2202 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:59:00.759190 kubelet[2202]: I0317 17:59:00.759114 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:59:00.761256 kubelet[2202]: I0317 17:59:00.761226 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:59:00.761256 kubelet[2202]: I0317 17:59:00.761256 2202 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 17:59:00.761433 kubelet[2202]: I0317 17:59:00.761277 2202 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 17:59:00.761433 kubelet[2202]: I0317 17:59:00.761285 2202 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 17:59:00.761433 kubelet[2202]: E0317 17:59:00.761343 2202 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:59:00.762659 kubelet[2202]: W0317 17:59:00.762597 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.200.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:00.762659 kubelet[2202]: E0317 17:59:00.762654 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.200.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:00.766544 kubelet[2202]: I0317 17:59:00.766470 2202 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 17 17:59:00.766544 kubelet[2202]: I0317 17:59:00.766492 2202 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 17 17:59:00.766882 kubelet[2202]: I0317 17:59:00.766735 2202 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:59:00.770706 kubelet[2202]: I0317 17:59:00.770247 2202 policy_none.go:49] "None policy: Start"
Mar 17 17:59:00.770706 kubelet[2202]: I0317 17:59:00.770295 2202 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 17 17:59:00.770706 kubelet[2202]: I0317 17:59:00.770317 2202 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:59:00.777986 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:59:00.790047 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:59:00.805618 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:59:00.807439 kubelet[2202]: I0317 17:59:00.807407 2202 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:59:00.807656 kubelet[2202]: I0317 17:59:00.807640 2202 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:59:00.807721 kubelet[2202]: I0317 17:59:00.807658 2202 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:59:00.808946 kubelet[2202]: I0317 17:59:00.808601 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:59:00.810495 kubelet[2202]: E0317 17:59:00.810440 2202 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 17 17:59:00.810495 kubelet[2202]: E0317 17:59:00.810487 2202 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-6-847a660ba6\" not found"
Mar 17 17:59:00.875011 systemd[1]: Created slice kubepods-burstable-pod585dff515ddbb5a71d7b06a5c94fd15e.slice - libcontainer container kubepods-burstable-pod585dff515ddbb5a71d7b06a5c94fd15e.slice.
Mar 17 17:59:00.893752 kubelet[2202]: E0317 17:59:00.893615 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.899205 systemd[1]: Created slice kubepods-burstable-podf2ddf8ea1a05b241ffb7324439f07409.slice - libcontainer container kubepods-burstable-podf2ddf8ea1a05b241ffb7324439f07409.slice.
Mar 17 17:59:00.909240 kubelet[2202]: I0317 17:59:00.909204 2202 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.909712 kubelet[2202]: E0317 17:59:00.909623 2202 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://159.223.200.207:6443/api/v1/nodes\": dial tcp 159.223.200.207:6443: connect: connection refused" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.911440 kubelet[2202]: E0317 17:59:00.911406 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.914444 systemd[1]: Created slice kubepods-burstable-podb4df9bb439a95d436695668b6b60ed9f.slice - libcontainer container kubepods-burstable-podb4df9bb439a95d436695668b6b60ed9f.slice.
Mar 17 17:59:00.917255 kubelet[2202]: E0317 17:59:00.917203 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.918658 kubelet[2202]: I0317 17:59:00.918622 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.918839 kubelet[2202]: I0317 17:59:00.918722 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.918839 kubelet[2202]: I0317 17:59:00.918757 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.918839 kubelet[2202]: I0317 17:59:00.918802 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4df9bb439a95d436695668b6b60ed9f-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-6-847a660ba6\" (UID: \"b4df9bb439a95d436695668b6b60ed9f\") " pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.918935 kubelet[2202]: I0317 17:59:00.918879 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/585dff515ddbb5a71d7b06a5c94fd15e-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" (UID: \"585dff515ddbb5a71d7b06a5c94fd15e\") " pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.919639 kubelet[2202]: I0317 17:59:00.919398 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/585dff515ddbb5a71d7b06a5c94fd15e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" (UID: \"585dff515ddbb5a71d7b06a5c94fd15e\") " pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.919639 kubelet[2202]: I0317 17:59:00.919448 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.919639 kubelet[2202]: I0317 17:59:00.919485 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.919639 kubelet[2202]: I0317 17:59:00.919505 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/585dff515ddbb5a71d7b06a5c94fd15e-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" (UID: \"585dff515ddbb5a71d7b06a5c94fd15e\") " pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:00.919639 kubelet[2202]: E0317 17:59:00.919576 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.200.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-6-847a660ba6?timeout=10s\": dial tcp 159.223.200.207:6443: connect: connection refused" interval="400ms"
Mar 17 17:59:01.111693 kubelet[2202]: I0317 17:59:01.111616 2202 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:01.112316 kubelet[2202]: E0317 17:59:01.112237 2202 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://159.223.200.207:6443/api/v1/nodes\": dial tcp 159.223.200.207:6443: connect: connection refused" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:01.196120 kubelet[2202]: E0317 17:59:01.195940 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:01.198717 containerd[1494]: time="2025-03-17T17:59:01.198643108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-6-847a660ba6,Uid:585dff515ddbb5a71d7b06a5c94fd15e,Namespace:kube-system,Attempt:0,}"
Mar 17 17:59:01.201927 systemd-resolved[1337]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Mar 17 17:59:01.212782 kubelet[2202]: E0317 17:59:01.212336 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:01.213385 containerd[1494]: time="2025-03-17T17:59:01.213302850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-6-847a660ba6,Uid:f2ddf8ea1a05b241ffb7324439f07409,Namespace:kube-system,Attempt:0,}"
Mar 17 17:59:01.217698 kubelet[2202]: E0317 17:59:01.217652 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:01.218394 containerd[1494]: time="2025-03-17T17:59:01.218341807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-6-847a660ba6,Uid:b4df9bb439a95d436695668b6b60ed9f,Namespace:kube-system,Attempt:0,}"
Mar 17 17:59:01.321384 kubelet[2202]: E0317 17:59:01.321293 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.200.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-6-847a660ba6?timeout=10s\": dial tcp 159.223.200.207:6443: connect: connection refused" interval="800ms"
Mar 17 17:59:01.513471 kubelet[2202]: I0317 17:59:01.513351 2202 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:01.514515 kubelet[2202]: E0317 17:59:01.514471 2202 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://159.223.200.207:6443/api/v1/nodes\": dial tcp 159.223.200.207:6443: connect: connection refused" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:01.656885 kubelet[2202]: W0317 17:59:01.656733 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.200.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:01.656885 kubelet[2202]: E0317 17:59:01.656843 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.200.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:01.671516 kubelet[2202]: W0317 17:59:01.671327 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.200.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:01.671516 kubelet[2202]: E0317 17:59:01.671394 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.200.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:01.680311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881896109.mount: Deactivated successfully.
Mar 17 17:59:01.685356 containerd[1494]: time="2025-03-17T17:59:01.685287686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:59:01.686481 containerd[1494]: time="2025-03-17T17:59:01.686418305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 17 17:59:01.688676 containerd[1494]: time="2025-03-17T17:59:01.688553829Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:59:01.692572 containerd[1494]: time="2025-03-17T17:59:01.692300025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:59:01.694616 containerd[1494]: time="2025-03-17T17:59:01.694544842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:59:01.694754 containerd[1494]: time="2025-03-17T17:59:01.694687662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:59:01.701248 containerd[1494]: time="2025-03-17T17:59:01.701188383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:59:01.703873 containerd[1494]: time="2025-03-17T17:59:01.703114181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 504.331865ms"
Mar 17 17:59:01.705166 containerd[1494]: time="2025-03-17T17:59:01.704875640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.389668ms"
Mar 17 17:59:01.709519 containerd[1494]: time="2025-03-17T17:59:01.709243613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:59:01.710835 containerd[1494]: time="2025-03-17T17:59:01.710698601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.238767ms"
Mar 17 17:59:01.731559 kubelet[2202]: W0317 17:59:01.731401 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.200.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-6-847a660ba6&limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:01.731559 kubelet[2202]: E0317 17:59:01.731491 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.223.200.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-6-847a660ba6&limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:01.890460 containerd[1494]: time="2025-03-17T17:59:01.888105317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:59:01.891698 containerd[1494]: time="2025-03-17T17:59:01.891587198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:59:01.891698 containerd[1494]: time="2025-03-17T17:59:01.891647722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:59:01.892027 containerd[1494]: time="2025-03-17T17:59:01.891965829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:59:01.892195 containerd[1494]: time="2025-03-17T17:59:01.892136408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:01.892488 containerd[1494]: time="2025-03-17T17:59:01.892408906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:01.892594 containerd[1494]: time="2025-03-17T17:59:01.892544681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:01.893834 containerd[1494]: time="2025-03-17T17:59:01.893746425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:01.897523 containerd[1494]: time="2025-03-17T17:59:01.897158507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:59:01.897523 containerd[1494]: time="2025-03-17T17:59:01.897249566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:59:01.897523 containerd[1494]: time="2025-03-17T17:59:01.897274127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:01.897523 containerd[1494]: time="2025-03-17T17:59:01.897391296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:01.934232 systemd[1]: Started cri-containerd-09a0a39501c2ec9d373bf71556f7d9eba65daccf52d70fc7fbd88e011ab0df02.scope - libcontainer container 09a0a39501c2ec9d373bf71556f7d9eba65daccf52d70fc7fbd88e011ab0df02.
Mar 17 17:59:01.937106 systemd[1]: Started cri-containerd-a9f4c5227a6b4f16ee02a4e370f4af5d810d103573b5a385cb427fc5512ed617.scope - libcontainer container a9f4c5227a6b4f16ee02a4e370f4af5d810d103573b5a385cb427fc5512ed617.
Mar 17 17:59:01.945998 systemd[1]: Started cri-containerd-33799bd5705cc7706026ca88d49627f7d519477dea5f3136ba2fc84c8ee9618f.scope - libcontainer container 33799bd5705cc7706026ca88d49627f7d519477dea5f3136ba2fc84c8ee9618f.
Mar 17 17:59:01.949086 kubelet[2202]: W0317 17:59:01.946935 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.200.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.200.207:6443: connect: connection refused
Mar 17 17:59:01.949086 kubelet[2202]: E0317 17:59:01.946993 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.200.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.200.207:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:59:02.047298 containerd[1494]: time="2025-03-17T17:59:02.047249528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-6-847a660ba6,Uid:b4df9bb439a95d436695668b6b60ed9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f4c5227a6b4f16ee02a4e370f4af5d810d103573b5a385cb427fc5512ed617\""
Mar 17 17:59:02.050464 kubelet[2202]: E0317 17:59:02.049908 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:02.056741 containerd[1494]: time="2025-03-17T17:59:02.056357371Z" level=info msg="CreateContainer within sandbox \"a9f4c5227a6b4f16ee02a4e370f4af5d810d103573b5a385cb427fc5512ed617\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:59:02.064839 containerd[1494]: time="2025-03-17T17:59:02.064770807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-6-847a660ba6,Uid:585dff515ddbb5a71d7b06a5c94fd15e,Namespace:kube-system,Attempt:0,} returns sandbox id \"09a0a39501c2ec9d373bf71556f7d9eba65daccf52d70fc7fbd88e011ab0df02\""
Mar 17 17:59:02.066427 kubelet[2202]: E0317 17:59:02.066268 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:02.070444 containerd[1494]: time="2025-03-17T17:59:02.070163779Z" level=info msg="CreateContainer within sandbox \"09a0a39501c2ec9d373bf71556f7d9eba65daccf52d70fc7fbd88e011ab0df02\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:59:02.074937 containerd[1494]: time="2025-03-17T17:59:02.074704944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-6-847a660ba6,Uid:f2ddf8ea1a05b241ffb7324439f07409,Namespace:kube-system,Attempt:0,} returns sandbox id \"33799bd5705cc7706026ca88d49627f7d519477dea5f3136ba2fc84c8ee9618f\""
Mar 17 17:59:02.076787 kubelet[2202]: E0317 17:59:02.076539 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:02.079808 containerd[1494]: time="2025-03-17T17:59:02.079759046Z" level=info msg="CreateContainer within sandbox \"33799bd5705cc7706026ca88d49627f7d519477dea5f3136ba2fc84c8ee9618f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:59:02.086870 containerd[1494]: time="2025-03-17T17:59:02.086773777Z" level=info msg="CreateContainer within sandbox \"a9f4c5227a6b4f16ee02a4e370f4af5d810d103573b5a385cb427fc5512ed617\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7f363f30a487309821bbe4782eadd0aac2a9ea294ffb8b18fff90188cc7364e9\""
Mar 17 17:59:02.088849 containerd[1494]: time="2025-03-17T17:59:02.088674673Z" level=info msg="StartContainer for \"7f363f30a487309821bbe4782eadd0aac2a9ea294ffb8b18fff90188cc7364e9\""
Mar 17 17:59:02.096806 containerd[1494]: time="2025-03-17T17:59:02.096314050Z" level=info msg="CreateContainer within sandbox \"09a0a39501c2ec9d373bf71556f7d9eba65daccf52d70fc7fbd88e011ab0df02\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17f9b7b0815bb5cf8709bacfe75ce87fc458b5d2b458b12340de831540c6f989\""
Mar 17 17:59:02.097715 containerd[1494]: time="2025-03-17T17:59:02.097516958Z" level=info msg="StartContainer for \"17f9b7b0815bb5cf8709bacfe75ce87fc458b5d2b458b12340de831540c6f989\""
Mar 17 17:59:02.100391 containerd[1494]: time="2025-03-17T17:59:02.100294974Z" level=info msg="CreateContainer within sandbox \"33799bd5705cc7706026ca88d49627f7d519477dea5f3136ba2fc84c8ee9618f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cb96e1cb2cccf8a31fdad7cae53f738da4f3c2c8b68bc3911358e95e37570b30\""
Mar 17 17:59:02.101157 containerd[1494]: time="2025-03-17T17:59:02.101007783Z" level=info msg="StartContainer for \"cb96e1cb2cccf8a31fdad7cae53f738da4f3c2c8b68bc3911358e95e37570b30\""
Mar 17 17:59:02.122565 kubelet[2202]: E0317 17:59:02.122511 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.200.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-6-847a660ba6?timeout=10s\": dial tcp 159.223.200.207:6443: connect: connection refused" interval="1.6s"
Mar 17 17:59:02.142587 systemd[1]: Started cri-containerd-7f363f30a487309821bbe4782eadd0aac2a9ea294ffb8b18fff90188cc7364e9.scope - libcontainer container 7f363f30a487309821bbe4782eadd0aac2a9ea294ffb8b18fff90188cc7364e9.
Mar 17 17:59:02.171481 systemd[1]: Started cri-containerd-cb96e1cb2cccf8a31fdad7cae53f738da4f3c2c8b68bc3911358e95e37570b30.scope - libcontainer container cb96e1cb2cccf8a31fdad7cae53f738da4f3c2c8b68bc3911358e95e37570b30.
Mar 17 17:59:02.185193 systemd[1]: Started cri-containerd-17f9b7b0815bb5cf8709bacfe75ce87fc458b5d2b458b12340de831540c6f989.scope - libcontainer container 17f9b7b0815bb5cf8709bacfe75ce87fc458b5d2b458b12340de831540c6f989.
Mar 17 17:59:02.264792 containerd[1494]: time="2025-03-17T17:59:02.264035104Z" level=info msg="StartContainer for \"7f363f30a487309821bbe4782eadd0aac2a9ea294ffb8b18fff90188cc7364e9\" returns successfully"
Mar 17 17:59:02.286319 containerd[1494]: time="2025-03-17T17:59:02.285000430Z" level=info msg="StartContainer for \"cb96e1cb2cccf8a31fdad7cae53f738da4f3c2c8b68bc3911358e95e37570b30\" returns successfully"
Mar 17 17:59:02.312566 containerd[1494]: time="2025-03-17T17:59:02.312496037Z" level=info msg="StartContainer for \"17f9b7b0815bb5cf8709bacfe75ce87fc458b5d2b458b12340de831540c6f989\" returns successfully"
Mar 17 17:59:02.322764 kubelet[2202]: I0317 17:59:02.322717 2202 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:02.324703 kubelet[2202]: E0317 17:59:02.324641 2202 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://159.223.200.207:6443/api/v1/nodes\": dial tcp 159.223.200.207:6443: connect: connection refused" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:02.791190 kubelet[2202]: E0317 17:59:02.789341 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:02.791190 kubelet[2202]: E0317 17:59:02.789562 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:02.793487 kubelet[2202]: E0317 17:59:02.793453 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:02.793645 kubelet[2202]: E0317 17:59:02.793625 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:02.804300 kubelet[2202]: E0317 17:59:02.804264 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:02.804300 kubelet[2202]: E0317 17:59:02.804407 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:03.802622 kubelet[2202]: E0317 17:59:03.802579 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:03.803243 kubelet[2202]: E0317 17:59:03.802776 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:03.803243 kubelet[2202]: E0317 17:59:03.803208 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:03.803388 kubelet[2202]: E0317 17:59:03.803360 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:03.926911 kubelet[2202]: I0317 17:59:03.926580 2202 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.828344 kubelet[2202]: E0317 17:59:04.828271 2202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.0-6-847a660ba6\" not found" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.868661 kubelet[2202]: I0317 17:59:04.868410 2202 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.868661 kubelet[2202]: E0317 17:59:04.868463 2202 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230.1.0-6-847a660ba6\": node \"ci-4230.1.0-6-847a660ba6\" not found"
Mar 17 17:59:04.919308 kubelet[2202]: I0317 17:59:04.918996 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.961096 kubelet[2202]: E0317 17:59:04.961048 2202 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.961533 kubelet[2202]: I0317 17:59:04.961352 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.965782 kubelet[2202]: E0317 17:59:04.965503 2202 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.0-6-847a660ba6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.965782 kubelet[2202]: I0317 17:59:04.965537 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:04.967969 kubelet[2202]: E0317 17:59:04.967919 2202 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:05.697402 kubelet[2202]: I0317 17:59:05.697304 2202 apiserver.go:52] "Watching apiserver"
Mar 17 17:59:05.717771 kubelet[2202]: I0317 17:59:05.717714 2202 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:59:05.950053 kubelet[2202]: I0317 17:59:05.949683 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:05.957666 kubelet[2202]: W0317 17:59:05.957536 2202 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 17:59:05.958017 kubelet[2202]: E0317 17:59:05.957857 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:06.808862 kubelet[2202]: E0317 17:59:06.808752 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:07.168931 kubelet[2202]: I0317 17:59:07.168643 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:07.174351 kubelet[2202]: W0317 17:59:07.174283 2202 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 17:59:07.174689 kubelet[2202]: E0317 17:59:07.174637 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:07.214480 systemd[1]: Reload requested from client PID 2474 ('systemctl') (unit session-7.scope)...
Mar 17 17:59:07.214505 systemd[1]: Reloading...
Mar 17 17:59:07.360869 zram_generator::config[2519]: No configuration found.
Mar 17 17:59:07.519973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:59:07.662362 systemd[1]: Reloading finished in 447 ms.
Mar 17 17:59:07.695323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:59:07.714698 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:59:07.715135 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:59:07.715283 systemd[1]: kubelet.service: Consumed 918ms CPU time, 118.6M memory peak.
Mar 17 17:59:07.734344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:59:07.895095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:59:07.903872 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:59:07.976853 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:59:07.976853 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:59:07.976853 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:59:07.976853 kubelet[2569]: I0317 17:59:07.976362 2569 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:59:07.988791 kubelet[2569]: I0317 17:59:07.988723 2569 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 17:59:07.989002 kubelet[2569]: I0317 17:59:07.988988 2569 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:59:07.989634 kubelet[2569]: I0317 17:59:07.989419 2569 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 17:59:07.991585 kubelet[2569]: I0317 17:59:07.991553 2569 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:59:07.996878 kubelet[2569]: I0317 17:59:07.996294 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:59:08.009588 kubelet[2569]: E0317 17:59:08.009221 2569 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:59:08.009588 kubelet[2569]: I0317 17:59:08.009276 2569 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:59:08.016749 kubelet[2569]: I0317 17:59:08.015523 2569 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:59:08.017131 kubelet[2569]: I0317 17:59:08.016780 2569 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:59:08.017236 kubelet[2569]: I0317 17:59:08.016859 2569 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-6-847a660ba6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:59:08.017236 kubelet[2569]: I0317 17:59:08.017174 2569 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:59:08.017236 kubelet[2569]: I0317 17:59:08.017190 2569 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 17:59:08.017911 kubelet[2569]: I0317 17:59:08.017289 2569 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:59:08.017911 kubelet[2569]: I0317 17:59:08.017786 2569 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 17:59:08.017911 kubelet[2569]: I0317 17:59:08.017856 2569 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:59:08.017911 kubelet[2569]: I0317 17:59:08.017889 2569 kubelet.go:352] "Adding apiserver pod source"
Mar 17 17:59:08.020862 kubelet[2569]: I0317 17:59:08.018519 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:59:08.020862 kubelet[2569]: I0317 17:59:08.020413 2569 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:59:08.021122 kubelet[2569]: I0317 17:59:08.021031 2569 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:59:08.022411 kubelet[2569]: I0317 17:59:08.022382 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 17:59:08.022580 kubelet[2569]: I0317 17:59:08.022432 2569 server.go:1287] "Started kubelet"
Mar 17 17:59:08.027447 kubelet[2569]: I0317 17:59:08.027412 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:59:08.042098 kubelet[2569]: I0317 17:59:08.042038 2569 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:59:08.046977 kubelet[2569]: I0317 17:59:08.046927 2569 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 17:59:08.049059 kubelet[2569]: I0317 17:59:08.048986 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:59:08.049266 kubelet[2569]: I0317 17:59:08.049251 2569 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:59:08.049526 kubelet[2569]: I0317 17:59:08.049499 2569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:59:08.056652 kubelet[2569]: I0317 17:59:08.056599 2569 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 17:59:08.058664 kubelet[2569]: I0317 17:59:08.056802 2569 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:59:08.059721 kubelet[2569]: I0317 17:59:08.059676 2569 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:59:08.062552 kubelet[2569]: I0317 17:59:08.062507 2569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:59:08.063445 kubelet[2569]: I0317 17:59:08.062365 2569 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:59:08.066421 kubelet[2569]: E0317 17:59:08.066335 2569 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:59:08.068756 kubelet[2569]: I0317 17:59:08.067969 2569 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:59:08.076130 kubelet[2569]: I0317 17:59:08.076014 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:59:08.084211 kubelet[2569]: I0317 17:59:08.084165 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:59:08.085355 kubelet[2569]: I0317 17:59:08.084442 2569 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 17:59:08.086785 kubelet[2569]: I0317 17:59:08.086745 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 17:59:08.086962 kubelet[2569]: I0317 17:59:08.086910 2569 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 17:59:08.087023 kubelet[2569]: E0317 17:59:08.087004 2569 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:59:08.138727 kubelet[2569]: I0317 17:59:08.138696 2569 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 17 17:59:08.138952 kubelet[2569]: I0317 17:59:08.138937 2569 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 17 17:59:08.139035 kubelet[2569]: I0317 17:59:08.139026 2569 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:59:08.139351 kubelet[2569]: I0317 17:59:08.139328 2569 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 17:59:08.139484 kubelet[2569]: I0317 17:59:08.139447 2569 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 17:59:08.139557 kubelet[2569]: I0317 17:59:08.139549 2569 policy_none.go:49] "None policy: Start"
Mar 17 17:59:08.139621 kubelet[2569]: I0317 17:59:08.139611 2569 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 17 17:59:08.139697 kubelet[2569]: I0317 17:59:08.139688 2569 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:59:08.139958 kubelet[2569]: I0317 17:59:08.139942 2569 state_mem.go:75] "Updated machine memory state"
Mar 17 17:59:08.145414 kubelet[2569]: I0317 17:59:08.145280 2569 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:59:08.146255 kubelet[2569]: I0317 17:59:08.146224 2569 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:59:08.146475 kubelet[2569]: I0317 17:59:08.146429 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:59:08.146996 kubelet[2569]: I0317 17:59:08.146970 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:59:08.152691 kubelet[2569]: E0317 17:59:08.152617 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 17 17:59:08.189039 kubelet[2569]: I0317 17:59:08.188618 2569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:08.190989 kubelet[2569]: I0317 17:59:08.190807 2569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:08.191537 kubelet[2569]: I0317 17:59:08.191521 2569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:08.200649 kubelet[2569]: W0317 17:59:08.200292 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 17:59:08.200649 kubelet[2569]: W0317 17:59:08.200526 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 17:59:08.200649 kubelet[2569]: E0317 17:59:08.200577 2569 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6"
Mar 17 17:59:08.202369 kubelet[2569]: W0317 17:59:08.202331 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can
result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:59:08.202459 kubelet[2569]: E0317 17:59:08.202414 2569 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.255418 kubelet[2569]: I0317 17:59:08.254599 2569 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.265963 kubelet[2569]: I0317 17:59:08.265480 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/585dff515ddbb5a71d7b06a5c94fd15e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" (UID: \"585dff515ddbb5a71d7b06a5c94fd15e\") " pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.265963 kubelet[2569]: I0317 17:59:08.265545 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.265963 kubelet[2569]: I0317 17:59:08.265574 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.265963 kubelet[2569]: I0317 17:59:08.265608 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b4df9bb439a95d436695668b6b60ed9f-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-6-847a660ba6\" (UID: \"b4df9bb439a95d436695668b6b60ed9f\") " pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.265963 kubelet[2569]: I0317 17:59:08.265632 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/585dff515ddbb5a71d7b06a5c94fd15e-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" (UID: \"585dff515ddbb5a71d7b06a5c94fd15e\") " pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.266219 kubelet[2569]: I0317 17:59:08.265657 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/585dff515ddbb5a71d7b06a5c94fd15e-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" (UID: \"585dff515ddbb5a71d7b06a5c94fd15e\") " pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.266219 kubelet[2569]: I0317 17:59:08.265684 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.266219 kubelet[2569]: I0317 17:59:08.265712 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.266219 kubelet[2569]: I0317 17:59:08.265740 2569 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2ddf8ea1a05b241ffb7324439f07409-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-6-847a660ba6\" (UID: \"f2ddf8ea1a05b241ffb7324439f07409\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.267275 kubelet[2569]: I0317 17:59:08.266865 2569 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.267275 kubelet[2569]: I0317 17:59:08.266940 2569 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.0-6-847a660ba6" Mar 17 17:59:08.501581 kubelet[2569]: E0317 17:59:08.501449 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:08.502387 kubelet[2569]: E0317 17:59:08.501870 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:08.503687 kubelet[2569]: E0317 17:59:08.503543 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:09.028713 kubelet[2569]: I0317 17:59:09.028659 2569 apiserver.go:52] "Watching apiserver" Mar 17 17:59:09.059283 kubelet[2569]: I0317 17:59:09.059223 2569 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:59:09.120847 kubelet[2569]: E0317 17:59:09.119257 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:09.120847 kubelet[2569]: I0317 17:59:09.120285 
2569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:09.120847 kubelet[2569]: I0317 17:59:09.120740 2569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:09.187808 kubelet[2569]: W0317 17:59:09.185587 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:59:09.187808 kubelet[2569]: E0317 17:59:09.185640 2569 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.0-6-847a660ba6\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:09.187808 kubelet[2569]: E0317 17:59:09.185811 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:09.187808 kubelet[2569]: W0317 17:59:09.186524 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:59:09.187808 kubelet[2569]: E0317 17:59:09.186695 2569 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.0-6-847a660ba6\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6" Mar 17 17:59:09.187808 kubelet[2569]: E0317 17:59:09.186941 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:09.219504 kubelet[2569]: I0317 17:59:09.219410 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-6-847a660ba6" podStartSLOduration=1.2193679880000001 podStartE2EDuration="1.219367988s" 
podCreationTimestamp="2025-03-17 17:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:59:09.218193857 +0000 UTC m=+1.306691087" watchObservedRunningTime="2025-03-17 17:59:09.219367988 +0000 UTC m=+1.307865189" Mar 17 17:59:09.240578 kubelet[2569]: I0317 17:59:09.238720 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-6-847a660ba6" podStartSLOduration=2.238702879 podStartE2EDuration="2.238702879s" podCreationTimestamp="2025-03-17 17:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:59:09.238501026 +0000 UTC m=+1.326998235" watchObservedRunningTime="2025-03-17 17:59:09.238702879 +0000 UTC m=+1.327200080" Mar 17 17:59:09.295455 sudo[1650]: pam_unix(sudo:session): session closed for user root Mar 17 17:59:09.299508 sshd[1649]: Connection closed by 139.178.68.195 port 50852 Mar 17 17:59:09.300104 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Mar 17 17:59:09.306000 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:59:09.307403 systemd[1]: sshd@6-159.223.200.207:22-139.178.68.195:50852.service: Deactivated successfully. Mar 17 17:59:09.312072 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:59:09.312828 systemd[1]: session-7.scope: Consumed 4.229s CPU time, 166.5M memory peak. Mar 17 17:59:09.315804 systemd-logind[1468]: Removed session 7. 
Mar 17 17:59:10.121318 kubelet[2569]: E0317 17:59:10.121266 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:10.122619 kubelet[2569]: E0317 17:59:10.122576 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:11.123913 kubelet[2569]: E0317 17:59:11.123864 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:11.574444 kubelet[2569]: I0317 17:59:11.573685 2569 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:59:11.574598 containerd[1494]: time="2025-03-17T17:59:11.574245932Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:59:11.575566 kubelet[2569]: I0317 17:59:11.575528 2569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:59:12.124955 kubelet[2569]: E0317 17:59:12.124880 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:12.432460 kubelet[2569]: I0317 17:59:12.432237 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-6-847a660ba6" podStartSLOduration=7.432202249 podStartE2EDuration="7.432202249s" podCreationTimestamp="2025-03-17 17:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:59:09.256389176 +0000 UTC m=+1.344886415" watchObservedRunningTime="2025-03-17 17:59:12.432202249 +0000 UTC m=+4.520699505" Mar 17 17:59:12.464631 systemd[1]: Created slice kubepods-besteffort-pod3ffb33bd_b534_4e31_a769_10f115cb26e5.slice - libcontainer container kubepods-besteffort-pod3ffb33bd_b534_4e31_a769_10f115cb26e5.slice. Mar 17 17:59:12.481151 systemd[1]: Created slice kubepods-burstable-pod57c0384a_104d_4498_af14_3d50f7be0396.slice - libcontainer container kubepods-burstable-pod57c0384a_104d_4498_af14_3d50f7be0396.slice. 
Mar 17 17:59:12.487889 kubelet[2569]: I0317 17:59:12.487820 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/57c0384a-104d-4498-af14-3d50f7be0396-cni\") pod \"kube-flannel-ds-sbfgl\" (UID: \"57c0384a-104d-4498-af14-3d50f7be0396\") " pod="kube-flannel/kube-flannel-ds-sbfgl" Mar 17 17:59:12.488096 kubelet[2569]: I0317 17:59:12.487901 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ffb33bd-b534-4e31-a769-10f115cb26e5-lib-modules\") pod \"kube-proxy-f2b5q\" (UID: \"3ffb33bd-b534-4e31-a769-10f115cb26e5\") " pod="kube-system/kube-proxy-f2b5q" Mar 17 17:59:12.488096 kubelet[2569]: I0317 17:59:12.487936 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/57c0384a-104d-4498-af14-3d50f7be0396-cni-plugin\") pod \"kube-flannel-ds-sbfgl\" (UID: \"57c0384a-104d-4498-af14-3d50f7be0396\") " pod="kube-flannel/kube-flannel-ds-sbfgl" Mar 17 17:59:12.488096 kubelet[2569]: I0317 17:59:12.487951 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ffb33bd-b534-4e31-a769-10f115cb26e5-kube-proxy\") pod \"kube-proxy-f2b5q\" (UID: \"3ffb33bd-b534-4e31-a769-10f115cb26e5\") " pod="kube-system/kube-proxy-f2b5q" Mar 17 17:59:12.488096 kubelet[2569]: I0317 17:59:12.487970 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxmp7\" (UniqueName: \"kubernetes.io/projected/3ffb33bd-b534-4e31-a769-10f115cb26e5-kube-api-access-rxmp7\") pod \"kube-proxy-f2b5q\" (UID: \"3ffb33bd-b534-4e31-a769-10f115cb26e5\") " pod="kube-system/kube-proxy-f2b5q" Mar 17 17:59:12.488096 kubelet[2569]: I0317 17:59:12.488063 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/57c0384a-104d-4498-af14-3d50f7be0396-flannel-cfg\") pod \"kube-flannel-ds-sbfgl\" (UID: \"57c0384a-104d-4498-af14-3d50f7be0396\") " pod="kube-flannel/kube-flannel-ds-sbfgl" Mar 17 17:59:12.488237 kubelet[2569]: I0317 17:59:12.488087 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ffb33bd-b534-4e31-a769-10f115cb26e5-xtables-lock\") pod \"kube-proxy-f2b5q\" (UID: \"3ffb33bd-b534-4e31-a769-10f115cb26e5\") " pod="kube-system/kube-proxy-f2b5q" Mar 17 17:59:12.488237 kubelet[2569]: I0317 17:59:12.488115 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/57c0384a-104d-4498-af14-3d50f7be0396-run\") pod \"kube-flannel-ds-sbfgl\" (UID: \"57c0384a-104d-4498-af14-3d50f7be0396\") " pod="kube-flannel/kube-flannel-ds-sbfgl" Mar 17 17:59:12.488237 kubelet[2569]: I0317 17:59:12.488138 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598lt\" (UniqueName: \"kubernetes.io/projected/57c0384a-104d-4498-af14-3d50f7be0396-kube-api-access-598lt\") pod \"kube-flannel-ds-sbfgl\" (UID: \"57c0384a-104d-4498-af14-3d50f7be0396\") " pod="kube-flannel/kube-flannel-ds-sbfgl" Mar 17 17:59:12.488237 kubelet[2569]: I0317 17:59:12.488163 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57c0384a-104d-4498-af14-3d50f7be0396-xtables-lock\") pod \"kube-flannel-ds-sbfgl\" (UID: \"57c0384a-104d-4498-af14-3d50f7be0396\") " pod="kube-flannel/kube-flannel-ds-sbfgl" Mar 17 17:59:12.777728 kubelet[2569]: E0317 17:59:12.777545 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:12.779108 containerd[1494]: time="2025-03-17T17:59:12.778716344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2b5q,Uid:3ffb33bd-b534-4e31-a769-10f115cb26e5,Namespace:kube-system,Attempt:0,}" Mar 17 17:59:12.789403 kubelet[2569]: E0317 17:59:12.788994 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:12.793380 containerd[1494]: time="2025-03-17T17:59:12.792910228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sbfgl,Uid:57c0384a-104d-4498-af14-3d50f7be0396,Namespace:kube-flannel,Attempt:0,}" Mar 17 17:59:12.823648 containerd[1494]: time="2025-03-17T17:59:12.823261716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:59:12.823648 containerd[1494]: time="2025-03-17T17:59:12.823339525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:59:12.823648 containerd[1494]: time="2025-03-17T17:59:12.823353031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:12.823648 containerd[1494]: time="2025-03-17T17:59:12.823445508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:12.836130 containerd[1494]: time="2025-03-17T17:59:12.834879431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:59:12.836130 containerd[1494]: time="2025-03-17T17:59:12.835052828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:59:12.836130 containerd[1494]: time="2025-03-17T17:59:12.835068904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:12.836130 containerd[1494]: time="2025-03-17T17:59:12.835152053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:59:12.857049 systemd[1]: Started cri-containerd-93570fdd8be8c2a07facda01501aaf5d7aed9145855e98457a8b99c11bd22fe3.scope - libcontainer container 93570fdd8be8c2a07facda01501aaf5d7aed9145855e98457a8b99c11bd22fe3. Mar 17 17:59:12.873044 systemd[1]: Started cri-containerd-e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b.scope - libcontainer container e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b. 
Mar 17 17:59:12.904622 containerd[1494]: time="2025-03-17T17:59:12.904576911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2b5q,Uid:3ffb33bd-b534-4e31-a769-10f115cb26e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"93570fdd8be8c2a07facda01501aaf5d7aed9145855e98457a8b99c11bd22fe3\"" Mar 17 17:59:12.907070 kubelet[2569]: E0317 17:59:12.905590 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:12.909942 containerd[1494]: time="2025-03-17T17:59:12.909841471Z" level=info msg="CreateContainer within sandbox \"93570fdd8be8c2a07facda01501aaf5d7aed9145855e98457a8b99c11bd22fe3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:59:12.927109 containerd[1494]: time="2025-03-17T17:59:12.926950837Z" level=info msg="CreateContainer within sandbox \"93570fdd8be8c2a07facda01501aaf5d7aed9145855e98457a8b99c11bd22fe3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2535d0dac2696c02d1cb8e77b0e7d6c3b7c726bf98ea966371a14905043a557f\"" Mar 17 17:59:12.930879 containerd[1494]: time="2025-03-17T17:59:12.930349258Z" level=info msg="StartContainer for \"2535d0dac2696c02d1cb8e77b0e7d6c3b7c726bf98ea966371a14905043a557f\"" Mar 17 17:59:12.940247 containerd[1494]: time="2025-03-17T17:59:12.940208644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sbfgl,Uid:57c0384a-104d-4498-af14-3d50f7be0396,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b\"" Mar 17 17:59:12.941367 kubelet[2569]: E0317 17:59:12.941343 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:12.945516 containerd[1494]: time="2025-03-17T17:59:12.944953268Z" 
level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Mar 17 17:59:12.975074 systemd[1]: Started cri-containerd-2535d0dac2696c02d1cb8e77b0e7d6c3b7c726bf98ea966371a14905043a557f.scope - libcontainer container 2535d0dac2696c02d1cb8e77b0e7d6c3b7c726bf98ea966371a14905043a557f. Mar 17 17:59:13.015841 containerd[1494]: time="2025-03-17T17:59:13.014317414Z" level=info msg="StartContainer for \"2535d0dac2696c02d1cb8e77b0e7d6c3b7c726bf98ea966371a14905043a557f\" returns successfully" Mar 17 17:59:13.132714 kubelet[2569]: E0317 17:59:13.132639 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:14.926729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604326216.mount: Deactivated successfully. Mar 17 17:59:14.937235 kubelet[2569]: E0317 17:59:14.936897 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:14.958786 kubelet[2569]: I0317 17:59:14.958710 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f2b5q" podStartSLOduration=2.9586871070000003 podStartE2EDuration="2.958687107s" podCreationTimestamp="2025-03-17 17:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:59:13.151618705 +0000 UTC m=+5.240115913" watchObservedRunningTime="2025-03-17 17:59:14.958687107 +0000 UTC m=+7.047184317" Mar 17 17:59:14.979528 containerd[1494]: time="2025-03-17T17:59:14.978444710Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:14.979528 containerd[1494]: time="2025-03-17T17:59:14.979331904Z" level=info 
msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Mar 17 17:59:14.979528 containerd[1494]: time="2025-03-17T17:59:14.979402893Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:14.988656 containerd[1494]: time="2025-03-17T17:59:14.988604854Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:59:14.990014 containerd[1494]: time="2025-03-17T17:59:14.989922477Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.044912095s" Mar 17 17:59:14.990014 containerd[1494]: time="2025-03-17T17:59:14.990012811Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Mar 17 17:59:15.007546 containerd[1494]: time="2025-03-17T17:59:15.007463488Z" level=info msg="CreateContainer within sandbox \"e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 17 17:59:15.018912 containerd[1494]: time="2025-03-17T17:59:15.018874049Z" level=info msg="CreateContainer within sandbox \"e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f\"" Mar 17 
17:59:15.019914 containerd[1494]: time="2025-03-17T17:59:15.019872582Z" level=info msg="StartContainer for \"aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f\"" Mar 17 17:59:15.021385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852648227.mount: Deactivated successfully. Mar 17 17:59:15.056998 systemd[1]: Started cri-containerd-aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f.scope - libcontainer container aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f. Mar 17 17:59:15.094155 containerd[1494]: time="2025-03-17T17:59:15.093511092Z" level=info msg="StartContainer for \"aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f\" returns successfully" Mar 17 17:59:15.095364 systemd[1]: cri-containerd-aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f.scope: Deactivated successfully. Mar 17 17:59:15.124694 containerd[1494]: time="2025-03-17T17:59:15.124615832Z" level=info msg="shim disconnected" id=aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f namespace=k8s.io Mar 17 17:59:15.124694 containerd[1494]: time="2025-03-17T17:59:15.124673393Z" level=warning msg="cleaning up after shim disconnected" id=aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f namespace=k8s.io Mar 17 17:59:15.124694 containerd[1494]: time="2025-03-17T17:59:15.124681124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:59:15.141573 kubelet[2569]: E0317 17:59:15.139766 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:15.141573 kubelet[2569]: E0317 17:59:15.141215 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:15.848309 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-aecba0372189c36d0d973cfc01ee6aa9a665e34ef6f16b494aa48d1c533ab90f-rootfs.mount: Deactivated successfully. Mar 17 17:59:16.143452 kubelet[2569]: E0317 17:59:16.143418 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:16.146104 containerd[1494]: time="2025-03-17T17:59:16.145623128Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Mar 17 17:59:17.547172 kubelet[2569]: E0317 17:59:17.547105 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:17.834420 update_engine[1469]: I20250317 17:59:17.833971 1469 update_attempter.cc:509] Updating boot flags... Mar 17 17:59:17.884017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2945) Mar 17 17:59:17.981864 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2949) Mar 17 17:59:18.113890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2949) Mar 17 17:59:18.159041 kubelet[2569]: E0317 17:59:18.157748 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:59:18.308785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195750660.mount: Deactivated successfully. 
Mar 17 17:59:20.349158 kubelet[2569]: E0317 17:59:20.348246    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:21.061986 containerd[1494]: time="2025-03-17T17:59:21.061922217Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:59:21.065061 containerd[1494]: time="2025-03-17T17:59:21.064979161Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357"
Mar 17 17:59:21.065837 containerd[1494]: time="2025-03-17T17:59:21.065714884Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:59:21.070872 containerd[1494]: time="2025-03-17T17:59:21.069958604Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:59:21.071320 containerd[1494]: time="2025-03-17T17:59:21.071131810Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.925454317s"
Mar 17 17:59:21.071320 containerd[1494]: time="2025-03-17T17:59:21.071185169Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Mar 17 17:59:21.075178 containerd[1494]: time="2025-03-17T17:59:21.074886813Z" level=info msg="CreateContainer within sandbox \"e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 17 17:59:21.097857 containerd[1494]: time="2025-03-17T17:59:21.097789498Z" level=info msg="CreateContainer within sandbox \"e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2\""
Mar 17 17:59:21.100021 containerd[1494]: time="2025-03-17T17:59:21.098831382Z" level=info msg="StartContainer for \"ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2\""
Mar 17 17:59:21.130199 systemd[1]: Started cri-containerd-ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2.scope - libcontainer container ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2.
Mar 17 17:59:21.163753 systemd[1]: cri-containerd-ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2.scope: Deactivated successfully.
Mar 17 17:59:21.167430 containerd[1494]: time="2025-03-17T17:59:21.166596098Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57c0384a_104d_4498_af14_3d50f7be0396.slice/cri-containerd-ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2.scope/memory.events\": no such file or directory"
Mar 17 17:59:21.169690 containerd[1494]: time="2025-03-17T17:59:21.169547218Z" level=info msg="StartContainer for \"ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2\" returns successfully"
Mar 17 17:59:21.173482 kubelet[2569]: E0317 17:59:21.173452    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:21.193537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2-rootfs.mount: Deactivated successfully.
Mar 17 17:59:21.201256 kubelet[2569]: I0317 17:59:21.201166    2569 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 17 17:59:21.237563 containerd[1494]: time="2025-03-17T17:59:21.237444527Z" level=info msg="shim disconnected" id=ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2 namespace=k8s.io
Mar 17 17:59:21.237563 containerd[1494]: time="2025-03-17T17:59:21.237530642Z" level=warning msg="cleaning up after shim disconnected" id=ff719493ebf0764f04e6d8010341ef1d25cdef68c85860ccaec4caf8af6569a2 namespace=k8s.io
Mar 17 17:59:21.237563 containerd[1494]: time="2025-03-17T17:59:21.237539840Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:59:21.270636 systemd[1]: Created slice kubepods-burstable-pod69a19db9_46f5_4a66_9af2_f4e41632f370.slice - libcontainer container kubepods-burstable-pod69a19db9_46f5_4a66_9af2_f4e41632f370.slice.
Mar 17 17:59:21.292693 systemd[1]: Created slice kubepods-burstable-pod6a9b64d1_f92d_4117_bd6b_72ce07d99b5f.slice - libcontainer container kubepods-burstable-pod6a9b64d1_f92d_4117_bd6b_72ce07d99b5f.slice.
Mar 17 17:59:21.344867 kubelet[2569]: I0317 17:59:21.344663    2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ms2r\" (UniqueName: \"kubernetes.io/projected/6a9b64d1-f92d-4117-bd6b-72ce07d99b5f-kube-api-access-8ms2r\") pod \"coredns-668d6bf9bc-jxz2p\" (UID: \"6a9b64d1-f92d-4117-bd6b-72ce07d99b5f\") " pod="kube-system/coredns-668d6bf9bc-jxz2p"
Mar 17 17:59:21.344867 kubelet[2569]: I0317 17:59:21.344756    2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njz9\" (UniqueName: \"kubernetes.io/projected/69a19db9-46f5-4a66-9af2-f4e41632f370-kube-api-access-2njz9\") pod \"coredns-668d6bf9bc-pv46h\" (UID: \"69a19db9-46f5-4a66-9af2-f4e41632f370\") " pod="kube-system/coredns-668d6bf9bc-pv46h"
Mar 17 17:59:21.344867 kubelet[2569]: I0317 17:59:21.344791    2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a9b64d1-f92d-4117-bd6b-72ce07d99b5f-config-volume\") pod \"coredns-668d6bf9bc-jxz2p\" (UID: \"6a9b64d1-f92d-4117-bd6b-72ce07d99b5f\") " pod="kube-system/coredns-668d6bf9bc-jxz2p"
Mar 17 17:59:21.344867 kubelet[2569]: I0317 17:59:21.344845    2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a19db9-46f5-4a66-9af2-f4e41632f370-config-volume\") pod \"coredns-668d6bf9bc-pv46h\" (UID: \"69a19db9-46f5-4a66-9af2-f4e41632f370\") " pod="kube-system/coredns-668d6bf9bc-pv46h"
Mar 17 17:59:21.578310 kubelet[2569]: E0317 17:59:21.578258    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:21.580435 containerd[1494]: time="2025-03-17T17:59:21.580377222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pv46h,Uid:69a19db9-46f5-4a66-9af2-f4e41632f370,Namespace:kube-system,Attempt:0,}"
Mar 17 17:59:21.599571 kubelet[2569]: E0317 17:59:21.599424    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:21.607781 containerd[1494]: time="2025-03-17T17:59:21.607385014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxz2p,Uid:6a9b64d1-f92d-4117-bd6b-72ce07d99b5f,Namespace:kube-system,Attempt:0,}"
Mar 17 17:59:21.635358 containerd[1494]: time="2025-03-17T17:59:21.635173237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pv46h,Uid:69a19db9-46f5-4a66-9af2-f4e41632f370,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de093450695adc4d22bcf1087485fdf0c4eb9f5861e16f7b0390c0325b62e31c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 17 17:59:21.635988 kubelet[2569]: E0317 17:59:21.635680    2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de093450695adc4d22bcf1087485fdf0c4eb9f5861e16f7b0390c0325b62e31c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 17 17:59:21.635988 kubelet[2569]: E0317 17:59:21.635837    2569 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de093450695adc4d22bcf1087485fdf0c4eb9f5861e16f7b0390c0325b62e31c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-pv46h"
Mar 17 17:59:21.635988 kubelet[2569]: E0317 17:59:21.635872    2569 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de093450695adc4d22bcf1087485fdf0c4eb9f5861e16f7b0390c0325b62e31c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-pv46h"
Mar 17 17:59:21.637486 kubelet[2569]: E0317 17:59:21.636295    2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pv46h_kube-system(69a19db9-46f5-4a66-9af2-f4e41632f370)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pv46h_kube-system(69a19db9-46f5-4a66-9af2-f4e41632f370)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de093450695adc4d22bcf1087485fdf0c4eb9f5861e16f7b0390c0325b62e31c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-pv46h" podUID="69a19db9-46f5-4a66-9af2-f4e41632f370"
Mar 17 17:59:21.656303 containerd[1494]: time="2025-03-17T17:59:21.656224389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxz2p,Uid:6a9b64d1-f92d-4117-bd6b-72ce07d99b5f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea1baee55f212ec5c7272893ebf4d43f8db117ded1fe331c71c69868e461a962\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 17 17:59:21.656568 kubelet[2569]: E0317 17:59:21.656526    2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea1baee55f212ec5c7272893ebf4d43f8db117ded1fe331c71c69868e461a962\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Mar 17 17:59:21.656659 kubelet[2569]: E0317 17:59:21.656597    2569 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea1baee55f212ec5c7272893ebf4d43f8db117ded1fe331c71c69868e461a962\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-jxz2p"
Mar 17 17:59:21.656659 kubelet[2569]: E0317 17:59:21.656621    2569 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea1baee55f212ec5c7272893ebf4d43f8db117ded1fe331c71c69868e461a962\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-jxz2p"
Mar 17 17:59:21.656765 kubelet[2569]: E0317 17:59:21.656683    2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jxz2p_kube-system(6a9b64d1-f92d-4117-bd6b-72ce07d99b5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jxz2p_kube-system(6a9b64d1-f92d-4117-bd6b-72ce07d99b5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea1baee55f212ec5c7272893ebf4d43f8db117ded1fe331c71c69868e461a962\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-jxz2p" podUID="6a9b64d1-f92d-4117-bd6b-72ce07d99b5f"
Mar 17 17:59:22.178564 kubelet[2569]: E0317 17:59:22.178066    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:22.182002 containerd[1494]: time="2025-03-17T17:59:22.181795152Z" level=info msg="CreateContainer within sandbox \"e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Mar 17 17:59:22.199916 containerd[1494]: time="2025-03-17T17:59:22.199420373Z" level=info msg="CreateContainer within sandbox \"e046750449b31439d522f4ce344856e8e397829df0ffe9955fc296afdc31549b\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6efa047847cc4190cc0f3f29d4e1261f94e9c95f53db75bf2d057e94ffbc12be\""
Mar 17 17:59:22.201336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416052939.mount: Deactivated successfully.
Mar 17 17:59:22.204094 containerd[1494]: time="2025-03-17T17:59:22.202771186Z" level=info msg="StartContainer for \"6efa047847cc4190cc0f3f29d4e1261f94e9c95f53db75bf2d057e94ffbc12be\""
Mar 17 17:59:22.253145 systemd[1]: Started cri-containerd-6efa047847cc4190cc0f3f29d4e1261f94e9c95f53db75bf2d057e94ffbc12be.scope - libcontainer container 6efa047847cc4190cc0f3f29d4e1261f94e9c95f53db75bf2d057e94ffbc12be.
Mar 17 17:59:22.292173 containerd[1494]: time="2025-03-17T17:59:22.292099322Z" level=info msg="StartContainer for \"6efa047847cc4190cc0f3f29d4e1261f94e9c95f53db75bf2d057e94ffbc12be\" returns successfully"
Mar 17 17:59:23.084460 systemd[1]: run-containerd-runc-k8s.io-6efa047847cc4190cc0f3f29d4e1261f94e9c95f53db75bf2d057e94ffbc12be-runc.z1IWf7.mount: Deactivated successfully.
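The two CoreDNS sandboxes fail here because flannel's CNI plugin reads /run/flannel/subnet.env, a small KEY=VALUE file the flannel daemon writes only after it has leased a subnet; the kube-flannel container has just been created, so the file does not exist yet, and the later successful sandbox runs confirm it appears once flannel is up. A sketch of such a parser on a plausible file (the key names and values below are assumptions consistent with the CNI config printed later in this log, not a dump of the node's actual file; flannel's real loader is Go, not Python):

```python
# Illustrative parser for a flannel-style subnet.env KEY=VALUE file.
def load_flannel_subnet_env(text):
    """Parse KEY=VALUE lines into a dict, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Hypothetical content matching the delegate config in this log
# (192.168.0.0/24 node subnet, mtu 1450, ipMasq false).
SAMPLE = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
"""
env = load_flannel_subnet_env(SAMPLE)
```

When the file is missing, the plugin can only fail the add operation, which surfaces as the CreatePodSandboxError seen above; kubelet then retries until the file exists.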
Mar 17 17:59:23.182836 kubelet[2569]: E0317 17:59:23.182338    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:23.198513 kubelet[2569]: I0317 17:59:23.198038    2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-sbfgl" podStartSLOduration=3.068289124 podStartE2EDuration="11.198011843s" podCreationTimestamp="2025-03-17 17:59:12 +0000 UTC" firstStartedPulling="2025-03-17 17:59:12.943056476 +0000 UTC m=+5.031553663" lastFinishedPulling="2025-03-17 17:59:21.072779194 +0000 UTC m=+13.161276382" observedRunningTime="2025-03-17 17:59:23.197976393 +0000 UTC m=+15.286473602" watchObservedRunningTime="2025-03-17 17:59:23.198011843 +0000 UTC m=+15.286509052"
Mar 17 17:59:23.371360 systemd-networkd[1387]: flannel.1: Link UP
Mar 17 17:59:23.371371 systemd-networkd[1387]: flannel.1: Gained carrier
Mar 17 17:59:24.184051 kubelet[2569]: E0317 17:59:24.183956    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:25.047172 systemd-networkd[1387]: flannel.1: Gained IPv6LL
Mar 17 17:59:32.087916 kubelet[2569]: E0317 17:59:32.087868    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:32.090233 containerd[1494]: time="2025-03-17T17:59:32.089761875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pv46h,Uid:69a19db9-46f5-4a66-9af2-f4e41632f370,Namespace:kube-system,Attempt:0,}"
Mar 17 17:59:32.134574 systemd-networkd[1387]: cni0: Link UP
Mar 17 17:59:32.134585 systemd-networkd[1387]: cni0: Gained carrier
Mar 17 17:59:32.140548 systemd-networkd[1387]: cni0: Lost carrier
Mar 17 17:59:32.146024 systemd-networkd[1387]: vethba83cb33: Link UP
Mar 17 17:59:32.147168 kernel: cni0: port 1(vethba83cb33) entered blocking state
Mar 17 17:59:32.147272 kernel: cni0: port 1(vethba83cb33) entered disabled state
Mar 17 17:59:32.147982 kernel: vethba83cb33: entered allmulticast mode
Mar 17 17:59:32.148934 kernel: vethba83cb33: entered promiscuous mode
Mar 17 17:59:32.151931 kernel: cni0: port 1(vethba83cb33) entered blocking state
Mar 17 17:59:32.152073 kernel: cni0: port 1(vethba83cb33) entered forwarding state
Mar 17 17:59:32.153970 kernel: cni0: port 1(vethba83cb33) entered disabled state
Mar 17 17:59:32.166459 kernel: cni0: port 1(vethba83cb33) entered blocking state
Mar 17 17:59:32.166584 kernel: cni0: port 1(vethba83cb33) entered forwarding state
Mar 17 17:59:32.169909 systemd-networkd[1387]: vethba83cb33: Gained carrier
Mar 17 17:59:32.170477 systemd-networkd[1387]: cni0: Gained carrier
Mar 17 17:59:32.177901 containerd[1494]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Mar 17 17:59:32.177901 containerd[1494]: delegateAdd: netconf sent to delegate plugin:
Mar 17 17:59:32.215799 containerd[1494]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Mar 17 17:59:32.215799 containerd[1494]: time="2025-03-17T17:59:32.215246711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:59:32.215799 containerd[1494]: time="2025-03-17T17:59:32.215333291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:59:32.215799 containerd[1494]: time="2025-03-17T17:59:32.215351864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:32.215799 containerd[1494]: time="2025-03-17T17:59:32.215473005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:32.249153 systemd[1]: Started cri-containerd-2faf481f556faa946986e2188d9fdc3cdadd978ba565899763bfd7bbc79e5fcc.scope - libcontainer container 2faf481f556faa946986e2188d9fdc3cdadd978ba565899763bfd7bbc79e5fcc.
Mar 17 17:59:32.302009 containerd[1494]: time="2025-03-17T17:59:32.301893564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pv46h,Uid:69a19db9-46f5-4a66-9af2-f4e41632f370,Namespace:kube-system,Attempt:0,} returns sandbox id \"2faf481f556faa946986e2188d9fdc3cdadd978ba565899763bfd7bbc79e5fcc\""
Mar 17 17:59:32.303963 kubelet[2569]: E0317 17:59:32.302930    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:32.307886 containerd[1494]: time="2025-03-17T17:59:32.307480455Z" level=info msg="CreateContainer within sandbox \"2faf481f556faa946986e2188d9fdc3cdadd978ba565899763bfd7bbc79e5fcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:59:32.322433 containerd[1494]: time="2025-03-17T17:59:32.322366735Z" level=info msg="CreateContainer within sandbox \"2faf481f556faa946986e2188d9fdc3cdadd978ba565899763bfd7bbc79e5fcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2d0dfc83db82dcb6ccd170f77a9b63ce37777a2e30724b311843836b19fe8fa\""
Mar 17 17:59:32.323255 containerd[1494]: time="2025-03-17T17:59:32.323141996Z" level=info msg="StartContainer for \"b2d0dfc83db82dcb6ccd170f77a9b63ce37777a2e30724b311843836b19fe8fa\""
Mar 17 17:59:32.361126 systemd[1]: Started cri-containerd-b2d0dfc83db82dcb6ccd170f77a9b63ce37777a2e30724b311843836b19fe8fa.scope - libcontainer container b2d0dfc83db82dcb6ccd170f77a9b63ce37777a2e30724b311843836b19fe8fa.
Mar 17 17:59:32.392266 containerd[1494]: time="2025-03-17T17:59:32.392203133Z" level=info msg="StartContainer for \"b2d0dfc83db82dcb6ccd170f77a9b63ce37777a2e30724b311843836b19fe8fa\" returns successfully"
Mar 17 17:59:33.100635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672044345.mount: Deactivated successfully.
Mar 17 17:59:33.205638 kubelet[2569]: E0317 17:59:33.205594    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:33.234889 kubelet[2569]: I0317 17:59:33.234762    2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pv46h" podStartSLOduration=21.234737908 podStartE2EDuration="21.234737908s" podCreationTimestamp="2025-03-17 17:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:59:33.219084872 +0000 UTC m=+25.307582082" watchObservedRunningTime="2025-03-17 17:59:33.234737908 +0000 UTC m=+25.323235129"
Mar 17 17:59:33.559152 systemd-networkd[1387]: cni0: Gained IPv6LL
Mar 17 17:59:33.815131 systemd-networkd[1387]: vethba83cb33: Gained IPv6LL
Mar 17 17:59:34.207860 kubelet[2569]: E0317 17:59:34.207790    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:35.209774 kubelet[2569]: E0317 17:59:35.209698    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:37.087870 kubelet[2569]: E0317 17:59:37.087527    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:37.088666 containerd[1494]: time="2025-03-17T17:59:37.087979131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxz2p,Uid:6a9b64d1-f92d-4117-bd6b-72ce07d99b5f,Namespace:kube-system,Attempt:0,}"
Mar 17 17:59:37.113504 systemd-networkd[1387]: vetha00ce25e: Link UP
Mar 17 17:59:37.116647 kernel: cni0: port 2(vetha00ce25e) entered blocking state
Mar 17 17:59:37.116770 kernel: cni0: port 2(vetha00ce25e) entered disabled state
Mar 17 17:59:37.119988 kernel: vetha00ce25e: entered allmulticast mode
Mar 17 17:59:37.120266 kernel: vetha00ce25e: entered promiscuous mode
Mar 17 17:59:37.127772 kernel: cni0: port 2(vetha00ce25e) entered blocking state
Mar 17 17:59:37.128145 kernel: cni0: port 2(vetha00ce25e) entered forwarding state
Mar 17 17:59:37.127032 systemd-networkd[1387]: vetha00ce25e: Gained carrier
Mar 17 17:59:37.132842 containerd[1494]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Mar 17 17:59:37.132842 containerd[1494]: delegateAdd: netconf sent to delegate plugin:
Mar 17 17:59:37.171205 containerd[1494]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Mar 17 17:59:37.171205 containerd[1494]: time="2025-03-17T17:59:37.171034652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:59:37.171205 containerd[1494]: time="2025-03-17T17:59:37.171133758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:59:37.171205 containerd[1494]: time="2025-03-17T17:59:37.171153013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:37.171615 containerd[1494]: time="2025-03-17T17:59:37.171554937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:59:37.199445 systemd[1]: run-containerd-runc-k8s.io-8961c2ea408c8157703324f46596a442916e8078f9d299eecab99eb6799f121f-runc.pFHMB2.mount: Deactivated successfully.
Mar 17 17:59:37.207125 systemd[1]: Started cri-containerd-8961c2ea408c8157703324f46596a442916e8078f9d299eecab99eb6799f121f.scope - libcontainer container 8961c2ea408c8157703324f46596a442916e8078f9d299eecab99eb6799f121f.
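Before each sandbox add, flannel prints the netconf it delegates to the "bridge" CNI plugin: the subnet from subnet.env, a /17 route to the rest of the flannel network, and mtu 1450 (VXLAN overhead subtracted from 1500). The JSON is valid as printed, so it can be parsed back to confirm the key fields (only the whitespace below is added for readability):

```python
import json

# The delegate config exactly as containerd logged it, reflowed.
delegate = json.loads("""
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,
 "ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],
         "routes":[{"dst":"192.168.0.0/17"}],
         "type":"host-local"},
 "isDefaultGateway":true,"isGateway":true,"mtu":1450,
 "name":"cbr0","type":"bridge"}
""")

# host-local IPAM leases pod IPs from this node's /24; the /17 route
# covers the other nodes' subnets via the flannel.1 VXLAN device.
node_subnet = delegate["ipam"]["ranges"][0][0]["subnet"]
```

This matches the earlier Go dump of the same structure (Mask 0xff,0xff,0x80,0x00 is the /17; the `(*uint)` mtu pointer resolves to 1450 here).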
Mar 17 17:59:37.265964 containerd[1494]: time="2025-03-17T17:59:37.265770073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jxz2p,Uid:6a9b64d1-f92d-4117-bd6b-72ce07d99b5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8961c2ea408c8157703324f46596a442916e8078f9d299eecab99eb6799f121f\""
Mar 17 17:59:37.267499 kubelet[2569]: E0317 17:59:37.267297    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:37.270581 containerd[1494]: time="2025-03-17T17:59:37.270538643Z" level=info msg="CreateContainer within sandbox \"8961c2ea408c8157703324f46596a442916e8078f9d299eecab99eb6799f121f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:59:37.283588 containerd[1494]: time="2025-03-17T17:59:37.283526179Z" level=info msg="CreateContainer within sandbox \"8961c2ea408c8157703324f46596a442916e8078f9d299eecab99eb6799f121f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8001787502bd52affc9c84c5bd1de709afc62ca1c5d54c67707ebdf9fe17c1e3\""
Mar 17 17:59:37.285001 containerd[1494]: time="2025-03-17T17:59:37.284269457Z" level=info msg="StartContainer for \"8001787502bd52affc9c84c5bd1de709afc62ca1c5d54c67707ebdf9fe17c1e3\""
Mar 17 17:59:37.319161 systemd[1]: Started cri-containerd-8001787502bd52affc9c84c5bd1de709afc62ca1c5d54c67707ebdf9fe17c1e3.scope - libcontainer container 8001787502bd52affc9c84c5bd1de709afc62ca1c5d54c67707ebdf9fe17c1e3.
Mar 17 17:59:37.355868 containerd[1494]: time="2025-03-17T17:59:37.354402811Z" level=info msg="StartContainer for \"8001787502bd52affc9c84c5bd1de709afc62ca1c5d54c67707ebdf9fe17c1e3\" returns successfully"
Mar 17 17:59:38.218106 kubelet[2569]: E0317 17:59:38.217401    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:38.231996 kubelet[2569]: I0317 17:59:38.231737    2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jxz2p" podStartSLOduration=26.231717489 podStartE2EDuration="26.231717489s" podCreationTimestamp="2025-03-17 17:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:59:38.230317589 +0000 UTC m=+30.318814799" watchObservedRunningTime="2025-03-17 17:59:38.231717489 +0000 UTC m=+30.320214698"
Mar 17 17:59:38.743989 systemd-networkd[1387]: vetha00ce25e: Gained IPv6LL
Mar 17 17:59:39.220582 kubelet[2569]: E0317 17:59:39.220150    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:40.222550 kubelet[2569]: E0317 17:59:40.222466    2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Mar 17 17:59:56.842253 systemd[1]: Started sshd@7-159.223.200.207:22-139.178.68.195:33790.service - OpenSSH per-connection server daemon (139.178.68.195:33790).
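The "Observed pod startup duration" records can be sanity-checked: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window. For the kube-flannel-ds-sbfgl line logged earlier, 23.198011 - 12.000000 gives the 11.198 s E2E figure, and subtracting the 8.130 s pull window (21.072779 - 12.943056) gives the 3.068 s SLO figure. A sketch of that arithmetic (timestamps truncated to microseconds, since datetime carries no nanoseconds):

```python
from datetime import datetime, timezone

def ts(s):
    """Parse a 'YYYY-MM-DD HH:MM:SS.ffffff' UTC timestamp from the log."""
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

# Values from the kube-flannel-ds-sbfgl startup-latency record.
created    = ts("2025-03-17 17:59:12.000000")
observed   = ts("2025-03-17 17:59:23.198011")
pull_start = ts("2025-03-17 17:59:12.943056")
pull_end   = ts("2025-03-17 17:59:21.072779")

e2e = (observed - created).total_seconds()           # end-to-end startup
slo = e2e - (pull_end - pull_start).total_seconds()  # excludes image pull
```

For the two CoreDNS pods, firstStartedPulling/lastFinishedPulling are the zero time (no pull happened), so SLO and E2E durations coincide, as the log shows.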
Mar 17 17:59:56.905068 sshd[3583]: Accepted publickey for core from 139.178.68.195 port 33790 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:59:56.907569 sshd-session[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:59:56.913907 systemd-logind[1468]: New session 8 of user core.
Mar 17 17:59:56.921156 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:59:57.071958 sshd[3585]: Connection closed by 139.178.68.195 port 33790
Mar 17 17:59:57.072658 sshd-session[3583]: pam_unix(sshd:session): session closed for user core
Mar 17 17:59:57.076496 systemd[1]: sshd@7-159.223.200.207:22-139.178.68.195:33790.service: Deactivated successfully.
Mar 17 17:59:57.078790 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:59:57.081198 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:59:57.082262 systemd-logind[1468]: Removed session 8.
Mar 17 18:00:02.099376 systemd[1]: Started sshd@8-159.223.200.207:22-139.178.68.195:33794.service - OpenSSH per-connection server daemon (139.178.68.195:33794).
Mar 17 18:00:02.152737 sshd[3621]: Accepted publickey for core from 139.178.68.195 port 33794 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 18:00:02.154793 sshd-session[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:02.162472 systemd-logind[1468]: New session 9 of user core.
Mar 17 18:00:02.169417 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 18:00:02.343625 sshd[3623]: Connection closed by 139.178.68.195 port 33794
Mar 17 18:00:02.344622 sshd-session[3621]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:02.351508 systemd[1]: sshd@8-159.223.200.207:22-139.178.68.195:33794.service: Deactivated successfully.
Mar 17 18:00:02.358688 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:00:02.362491 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:00:02.364665 systemd-logind[1468]: Removed session 9.
Mar 17 18:00:07.369133 systemd[1]: Started sshd@9-159.223.200.207:22-139.178.68.195:42644.service - OpenSSH per-connection server daemon (139.178.68.195:42644).
Mar 17 18:00:07.421858 sshd[3657]: Accepted publickey for core from 139.178.68.195 port 42644 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 18:00:07.423479 sshd-session[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:07.431263 systemd-logind[1468]: New session 10 of user core.
Mar 17 18:00:07.445103 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 18:00:07.611920 sshd[3659]: Connection closed by 139.178.68.195 port 42644
Mar 17 18:00:07.612739 sshd-session[3657]: pam_unix(sshd:session): session closed for user core
Mar 17 18:00:07.617413 systemd[1]: sshd@9-159.223.200.207:22-139.178.68.195:42644.service: Deactivated successfully.
Mar 17 18:00:07.621101 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:00:07.625791 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:00:07.628076 systemd-logind[1468]: Removed session 10.
Mar 17 18:00:12.636408 systemd[1]: Started sshd@10-159.223.200.207:22-139.178.68.195:42648.service - OpenSSH per-connection server daemon (139.178.68.195:42648).
Mar 17 18:00:12.721556 sshd[3695]: Accepted publickey for core from 139.178.68.195 port 42648 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 18:00:12.724618 sshd-session[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:00:12.732526 systemd-logind[1468]: New session 11 of user core.
Mar 17 18:00:12.744377 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 18:00:12.948027 sshd[3697]: Connection closed by 139.178.68.195 port 42648 Mar 17 18:00:12.947115 sshd-session[3695]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:12.963475 systemd[1]: sshd@10-159.223.200.207:22-139.178.68.195:42648.service: Deactivated successfully. Mar 17 18:00:12.969476 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:00:12.973234 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:00:12.984297 systemd[1]: Started sshd@11-159.223.200.207:22-139.178.68.195:42658.service - OpenSSH per-connection server daemon (139.178.68.195:42658). Mar 17 18:00:12.988005 systemd-logind[1468]: Removed session 11. Mar 17 18:00:13.060596 sshd[3708]: Accepted publickey for core from 139.178.68.195 port 42658 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:13.062927 sshd-session[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:13.072930 systemd-logind[1468]: New session 12 of user core. Mar 17 18:00:13.080144 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 18:00:13.338128 sshd[3711]: Connection closed by 139.178.68.195 port 42658 Mar 17 18:00:13.339235 sshd-session[3708]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:13.352498 systemd[1]: sshd@11-159.223.200.207:22-139.178.68.195:42658.service: Deactivated successfully. Mar 17 18:00:13.357163 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:00:13.361107 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:00:13.374439 systemd[1]: Started sshd@12-159.223.200.207:22-139.178.68.195:42666.service - OpenSSH per-connection server daemon (139.178.68.195:42666). Mar 17 18:00:13.379056 systemd-logind[1468]: Removed session 12. 
Mar 17 18:00:13.448190 sshd[3722]: Accepted publickey for core from 139.178.68.195 port 42666 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:13.449971 sshd-session[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:13.456952 systemd-logind[1468]: New session 13 of user core. Mar 17 18:00:13.465114 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 18:00:13.634377 sshd[3725]: Connection closed by 139.178.68.195 port 42666 Mar 17 18:00:13.635202 sshd-session[3722]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:13.640465 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:00:13.642026 systemd[1]: sshd@12-159.223.200.207:22-139.178.68.195:42666.service: Deactivated successfully. Mar 17 18:00:13.645522 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:00:13.648194 systemd-logind[1468]: Removed session 13. Mar 17 18:00:18.655234 systemd[1]: Started sshd@13-159.223.200.207:22-139.178.68.195:56580.service - OpenSSH per-connection server daemon (139.178.68.195:56580). Mar 17 18:00:18.703772 sshd[3765]: Accepted publickey for core from 139.178.68.195 port 56580 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:18.705854 sshd-session[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:18.713717 systemd-logind[1468]: New session 14 of user core. Mar 17 18:00:18.724193 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 18:00:18.880866 sshd[3767]: Connection closed by 139.178.68.195 port 56580 Mar 17 18:00:18.881708 sshd-session[3765]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:18.887915 systemd[1]: sshd@13-159.223.200.207:22-139.178.68.195:56580.service: Deactivated successfully. Mar 17 18:00:18.890374 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 17 18:00:18.891612 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:00:18.893284 systemd-logind[1468]: Removed session 14. Mar 17 18:00:23.897121 systemd[1]: Started sshd@14-159.223.200.207:22-139.178.68.195:56592.service - OpenSSH per-connection server daemon (139.178.68.195:56592). Mar 17 18:00:23.958056 sshd[3815]: Accepted publickey for core from 139.178.68.195 port 56592 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:23.960348 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:23.968087 systemd-logind[1468]: New session 15 of user core. Mar 17 18:00:23.974234 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 18:00:24.132894 sshd[3817]: Connection closed by 139.178.68.195 port 56592 Mar 17 18:00:24.133531 sshd-session[3815]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:24.144147 systemd[1]: sshd@14-159.223.200.207:22-139.178.68.195:56592.service: Deactivated successfully. Mar 17 18:00:24.146891 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:00:24.149304 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:00:24.154164 systemd[1]: Started sshd@15-159.223.200.207:22-139.178.68.195:56602.service - OpenSSH per-connection server daemon (139.178.68.195:56602). Mar 17 18:00:24.156239 systemd-logind[1468]: Removed session 15. Mar 17 18:00:24.202288 sshd[3828]: Accepted publickey for core from 139.178.68.195 port 56602 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:24.205047 sshd-session[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:24.211980 systemd-logind[1468]: New session 16 of user core. Mar 17 18:00:24.219120 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 17 18:00:24.507170 sshd[3831]: Connection closed by 139.178.68.195 port 56602 Mar 17 18:00:24.508511 sshd-session[3828]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:24.523247 systemd[1]: sshd@15-159.223.200.207:22-139.178.68.195:56602.service: Deactivated successfully. Mar 17 18:00:24.526456 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:00:24.528486 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:00:24.540601 systemd[1]: Started sshd@16-159.223.200.207:22-139.178.68.195:56618.service - OpenSSH per-connection server daemon (139.178.68.195:56618). Mar 17 18:00:24.543425 systemd-logind[1468]: Removed session 16. Mar 17 18:00:24.601919 sshd[3840]: Accepted publickey for core from 139.178.68.195 port 56618 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:24.604084 sshd-session[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:24.612066 systemd-logind[1468]: New session 17 of user core. Mar 17 18:00:24.618172 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 18:00:25.540452 sshd[3843]: Connection closed by 139.178.68.195 port 56618 Mar 17 18:00:25.541856 sshd-session[3840]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:25.557990 systemd[1]: sshd@16-159.223.200.207:22-139.178.68.195:56618.service: Deactivated successfully. Mar 17 18:00:25.568423 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:00:25.570640 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:00:25.580662 systemd[1]: Started sshd@17-159.223.200.207:22-139.178.68.195:56632.service - OpenSSH per-connection server daemon (139.178.68.195:56632). Mar 17 18:00:25.586865 systemd-logind[1468]: Removed session 17. 
Mar 17 18:00:25.639861 sshd[3861]: Accepted publickey for core from 139.178.68.195 port 56632 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:25.641896 sshd-session[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:25.649468 systemd-logind[1468]: New session 18 of user core. Mar 17 18:00:25.660216 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 18:00:25.956935 sshd[3864]: Connection closed by 139.178.68.195 port 56632 Mar 17 18:00:25.957489 sshd-session[3861]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:25.970213 systemd[1]: sshd@17-159.223.200.207:22-139.178.68.195:56632.service: Deactivated successfully. Mar 17 18:00:25.974330 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:00:25.979088 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:00:25.985559 systemd[1]: Started sshd@18-159.223.200.207:22-139.178.68.195:36022.service - OpenSSH per-connection server daemon (139.178.68.195:36022). Mar 17 18:00:25.990098 systemd-logind[1468]: Removed session 18. Mar 17 18:00:26.034938 sshd[3873]: Accepted publickey for core from 139.178.68.195 port 36022 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:26.037273 sshd-session[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:26.045473 systemd-logind[1468]: New session 19 of user core. Mar 17 18:00:26.063143 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 18:00:26.212930 sshd[3876]: Connection closed by 139.178.68.195 port 36022 Mar 17 18:00:26.213933 sshd-session[3873]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:26.219237 systemd[1]: sshd@18-159.223.200.207:22-139.178.68.195:36022.service: Deactivated successfully. Mar 17 18:00:26.223201 systemd[1]: session-19.scope: Deactivated successfully. 
Mar 17 18:00:26.224843 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:00:26.226253 systemd-logind[1468]: Removed session 19. Mar 17 18:00:27.110899 kubelet[2569]: E0317 18:00:27.110745 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:00:31.238277 systemd[1]: Started sshd@19-159.223.200.207:22-139.178.68.195:36024.service - OpenSSH per-connection server daemon (139.178.68.195:36024). Mar 17 18:00:31.305781 sshd[3908]: Accepted publickey for core from 139.178.68.195 port 36024 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:31.308068 sshd-session[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:31.315440 systemd-logind[1468]: New session 20 of user core. Mar 17 18:00:31.323220 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 18:00:31.499040 sshd[3910]: Connection closed by 139.178.68.195 port 36024 Mar 17 18:00:31.500377 sshd-session[3908]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:31.512538 systemd[1]: sshd@19-159.223.200.207:22-139.178.68.195:36024.service: Deactivated successfully. Mar 17 18:00:31.517438 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:00:31.519431 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:00:31.522260 systemd-logind[1468]: Removed session 20. Mar 17 18:00:34.088856 kubelet[2569]: E0317 18:00:34.088632 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:00:36.523293 systemd[1]: Started sshd@20-159.223.200.207:22-139.178.68.195:37354.service - OpenSSH per-connection server daemon (139.178.68.195:37354). 
Mar 17 18:00:36.573551 sshd[3947]: Accepted publickey for core from 139.178.68.195 port 37354 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:36.575695 sshd-session[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:36.581987 systemd-logind[1468]: New session 21 of user core. Mar 17 18:00:36.593450 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 18:00:36.771733 sshd[3949]: Connection closed by 139.178.68.195 port 37354 Mar 17 18:00:36.773202 sshd-session[3947]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:36.779858 systemd[1]: sshd@20-159.223.200.207:22-139.178.68.195:37354.service: Deactivated successfully. Mar 17 18:00:36.782579 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:00:36.783765 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:00:36.785386 systemd-logind[1468]: Removed session 21. Mar 17 18:00:39.088722 kubelet[2569]: E0317 18:00:39.088532 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:00:41.088184 kubelet[2569]: E0317 18:00:41.088055 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:00:41.803612 systemd[1]: Started sshd@21-159.223.200.207:22-139.178.68.195:37366.service - OpenSSH per-connection server daemon (139.178.68.195:37366). Mar 17 18:00:41.869787 sshd[3983]: Accepted publickey for core from 139.178.68.195 port 37366 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:41.873314 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:41.882098 systemd-logind[1468]: New session 22 of user core. 
Mar 17 18:00:41.886379 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 18:00:42.051589 sshd[3985]: Connection closed by 139.178.68.195 port 37366 Mar 17 18:00:42.052328 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:42.058054 systemd[1]: sshd@21-159.223.200.207:22-139.178.68.195:37366.service: Deactivated successfully. Mar 17 18:00:42.061836 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:00:42.063366 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:00:42.065438 systemd-logind[1468]: Removed session 22. Mar 17 18:00:42.089420 kubelet[2569]: E0317 18:00:42.088447 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:00:45.088721 kubelet[2569]: E0317 18:00:45.088635 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 18:00:47.074222 systemd[1]: Started sshd@22-159.223.200.207:22-139.178.68.195:46834.service - OpenSSH per-connection server daemon (139.178.68.195:46834). Mar 17 18:00:47.132805 sshd[4020]: Accepted publickey for core from 139.178.68.195 port 46834 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 18:00:47.134534 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:00:47.140985 systemd-logind[1468]: New session 23 of user core. Mar 17 18:00:47.150119 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 17 18:00:47.284846 sshd[4022]: Connection closed by 139.178.68.195 port 46834 Mar 17 18:00:47.285658 sshd-session[4020]: pam_unix(sshd:session): session closed for user core Mar 17 18:00:47.290962 systemd[1]: sshd@22-159.223.200.207:22-139.178.68.195:46834.service: Deactivated successfully. Mar 17 18:00:47.294478 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:00:47.295671 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:00:47.297551 systemd-logind[1468]: Removed session 23. Mar 17 18:00:48.088536 kubelet[2569]: E0317 18:00:48.088069 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"