Dec 13 08:47:52.911544 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 08:47:52.911582 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 08:47:52.911597 kernel: BIOS-provided physical RAM map: Dec 13 08:47:52.911604 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 08:47:52.911610 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 08:47:52.911617 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 08:47:52.911625 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 13 08:47:52.911632 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 13 08:47:52.911638 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 08:47:52.911648 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 08:47:52.911663 kernel: NX (Execute Disable) protection: active Dec 13 08:47:52.911670 kernel: APIC: Static calls initialized Dec 13 08:47:52.911676 kernel: SMBIOS 2.8 present. Dec 13 08:47:52.911684 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 13 08:47:52.911692 kernel: Hypervisor detected: KVM Dec 13 08:47:52.911703 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 08:47:52.911714 kernel: kvm-clock: using sched offset of 2898907360 cycles Dec 13 08:47:52.911722 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 08:47:52.911730 kernel: tsc: Detected 2494.140 MHz processor Dec 13 08:47:52.911738 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 08:47:52.911746 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 08:47:52.911754 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 13 08:47:52.911762 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 08:47:52.911770 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 08:47:52.911781 kernel: ACPI: Early table checksum verification disabled Dec 13 08:47:52.911789 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 13 08:47:52.911796 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:47:52.911804 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:47:52.911812 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:47:52.911820 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 08:47:52.911828 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:47:52.911836 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:47:52.911843 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:47:52.911854 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:47:52.911862 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Dec 13 08:47:52.911869 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Dec 13 08:47:52.911877 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 08:47:52.911885 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Dec 13 08:47:52.911892 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Dec 13 08:47:52.911900 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Dec 13 08:47:52.911916 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Dec 13 08:47:52.911925 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 08:47:52.911933 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 08:47:52.911942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 08:47:52.911950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 08:47:52.911958 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Dec 13 08:47:52.911967 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Dec 13 08:47:52.911978 kernel: Zone ranges: Dec 13 08:47:52.911986 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 08:47:52.911994 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 13 08:47:52.912002 kernel: Normal empty Dec 13 08:47:52.912010 kernel: Movable zone start for each node Dec 13 08:47:52.912018 kernel: Early memory node ranges Dec 13 08:47:52.912026 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 08:47:52.912034 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 13 08:47:52.912042 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 13 08:47:52.912053 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 08:47:52.912063 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 08:47:52.912071 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 13 08:47:52.912079 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 08:47:52.912088 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 08:47:52.912096 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 08:47:52.912104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 08:47:52.912112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 08:47:52.912120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 08:47:52.912131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 08:47:52.912139 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 08:47:52.912147 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 08:47:52.912155 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 08:47:52.912163 kernel: TSC deadline timer available Dec 13 08:47:52.912171 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 08:47:52.912180 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 08:47:52.912188 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 08:47:52.912197 kernel: Booting paravirtualized kernel on KVM Dec 13 08:47:52.912206 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 08:47:52.912217 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 08:47:52.912226 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Dec 13 08:47:52.912234 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 08:47:52.912242 kernel: pcpu-alloc: [0] 0 1 Dec 13 08:47:52.912250 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 08:47:52.912259 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 08:47:52.912267 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 08:47:52.912278 kernel: random: crng init done Dec 13 08:47:52.912286 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 08:47:52.912294 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 08:47:52.912302 kernel: Fallback order for Node 0: 0 Dec 13 08:47:52.912310 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Dec 13 08:47:52.912318 kernel: Policy zone: DMA32 Dec 13 08:47:52.912326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 08:47:52.912335 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Dec 13 08:47:52.912343 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 08:47:52.912354 kernel: Kernel/User page tables isolation: enabled Dec 13 08:47:52.912362 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 08:47:52.912370 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 08:47:52.912378 kernel: Dynamic Preempt: voluntary Dec 13 08:47:52.912387 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 08:47:52.912396 kernel: rcu: RCU event tracing is enabled. Dec 13 08:47:52.912421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 08:47:52.912430 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 08:47:52.912438 kernel: Rude variant of Tasks RCU enabled. Dec 13 08:47:52.912446 kernel: Tracing variant of Tasks RCU enabled. Dec 13 08:47:52.912458 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 08:47:52.912466 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 08:47:52.912478 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 08:47:52.912500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 08:47:52.912518 kernel: Console: colour VGA+ 80x25 Dec 13 08:47:52.912562 kernel: printk: console [tty0] enabled Dec 13 08:47:52.912578 kernel: printk: console [ttyS0] enabled Dec 13 08:47:52.912586 kernel: ACPI: Core revision 20230628 Dec 13 08:47:52.912595 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 08:47:52.912606 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 08:47:52.912614 kernel: x2apic enabled Dec 13 08:47:52.912623 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 08:47:52.912631 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 08:47:52.912639 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 13 08:47:52.912647 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Dec 13 08:47:52.913755 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 08:47:52.913768 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 08:47:52.913797 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 08:47:52.913806 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 08:47:52.913815 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 08:47:52.913827 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 08:47:52.913836 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 08:47:52.913845 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 08:47:52.913854 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 08:47:52.913862 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 08:47:52.913871 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 08:47:52.913905 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 08:47:52.913918 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 08:47:52.913931 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 08:47:52.913944 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 08:47:52.913953 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 08:47:52.913962 kernel: Freeing SMP alternatives memory: 32K Dec 13 08:47:52.913970 kernel: pid_max: default: 32768 minimum: 301 Dec 13 08:47:52.913979 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 08:47:52.913992 kernel: landlock: Up and running. Dec 13 08:47:52.914001 kernel: SELinux: Initializing. Dec 13 08:47:52.914009 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 08:47:52.914018 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 08:47:52.914027 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 13 08:47:52.914036 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 08:47:52.914045 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 08:47:52.914054 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 08:47:52.914062 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Dec 13 08:47:52.914074 kernel: signal: max sigframe size: 1776 Dec 13 08:47:52.914083 kernel: rcu: Hierarchical SRCU implementation. Dec 13 08:47:52.914093 kernel: rcu: Max phase no-delay instances is 400. Dec 13 08:47:52.914101 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 08:47:52.914110 kernel: smp: Bringing up secondary CPUs ... Dec 13 08:47:52.914119 kernel: smpboot: x86: Booting SMP configuration: Dec 13 08:47:52.914130 kernel: .... node #0, CPUs: #1 Dec 13 08:47:52.914139 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 08:47:52.914148 kernel: smpboot: Max logical packages: 1 Dec 13 08:47:52.914159 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Dec 13 08:47:52.914168 kernel: devtmpfs: initialized Dec 13 08:47:52.914177 kernel: x86/mm: Memory block size: 128MB Dec 13 08:47:52.914186 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 08:47:52.914195 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 08:47:52.914203 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 08:47:52.914212 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 08:47:52.914221 kernel: audit: initializing netlink subsys (disabled) Dec 13 08:47:52.914230 kernel: audit: type=2000 audit(1734079672.124:1): state=initialized audit_enabled=0 res=1 Dec 13 08:47:52.914258 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 08:47:52.914267 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 08:47:52.914276 kernel: cpuidle: using governor menu Dec 13 08:47:52.914285 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 08:47:52.914294 kernel: dca service started, version 1.12.1 Dec 13 08:47:52.914302 kernel: PCI: Using configuration type 1 for base access Dec 13 08:47:52.914311 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 08:47:52.914329 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 08:47:52.914338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 08:47:52.914356 kernel: ACPI: Added _OSI(Module Device) Dec 13 08:47:52.914365 kernel: ACPI: Added _OSI(Processor Device) Dec 13 08:47:52.914373 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 08:47:52.914382 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 08:47:52.914391 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 08:47:52.914422 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 08:47:52.915413 kernel: ACPI: Interpreter enabled Dec 13 08:47:52.915429 kernel: ACPI: PM: (supports S0 S5) Dec 13 08:47:52.915439 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 08:47:52.915454 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 08:47:52.915464 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 08:47:52.915473 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 08:47:52.915482 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 08:47:52.915682 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 08:47:52.915785 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 08:47:52.915878 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 08:47:52.915893 kernel: acpiphp: Slot [3] registered Dec 13 08:47:52.915903 kernel: acpiphp: Slot [4] registered Dec 13 08:47:52.915911 kernel: acpiphp: Slot [5] registered Dec 13 08:47:52.915920 kernel: acpiphp: Slot [6] registered Dec 13 08:47:52.915929 kernel: acpiphp: Slot [7] registered Dec 13 08:47:52.915938 kernel: acpiphp: Slot [8] registered Dec 13 08:47:52.915946 kernel: acpiphp: Slot [9] registered Dec 13 08:47:52.915955 kernel: acpiphp: Slot [10] registered Dec 13 08:47:52.915964 kernel: acpiphp: Slot [11] registered Dec 13 08:47:52.915976 kernel: acpiphp: Slot [12] registered Dec 13 08:47:52.915985 kernel: acpiphp: Slot [13] registered Dec 13 08:47:52.915993 kernel: acpiphp: Slot [14] registered Dec 13 08:47:52.916002 kernel: acpiphp: Slot [15] registered Dec 13 08:47:52.916011 kernel: acpiphp: Slot [16] registered Dec 13 08:47:52.916019 kernel: acpiphp: Slot [17] registered Dec 13 08:47:52.916028 kernel: acpiphp: Slot [18] registered Dec 13 08:47:52.916037 kernel: acpiphp: Slot [19] registered Dec 13 08:47:52.916045 kernel: acpiphp: Slot [20] registered Dec 13 08:47:52.916054 kernel: acpiphp: Slot [21] registered Dec 13 08:47:52.916066 kernel: acpiphp: Slot [22] registered Dec 13 08:47:52.916075 kernel: acpiphp: Slot [23] registered Dec 13 08:47:52.916084 kernel: acpiphp: Slot [24] registered Dec 13 08:47:52.916092 kernel: acpiphp: Slot [25] registered Dec 13 08:47:52.916101 kernel: acpiphp: Slot [26] registered Dec 13 08:47:52.916110 kernel: acpiphp: Slot [27] registered Dec 13 08:47:52.916118 kernel: acpiphp: Slot [28] registered Dec 13 08:47:52.916127 kernel: acpiphp: Slot [29] registered Dec 13 08:47:52.916135 kernel: acpiphp: Slot [30] registered Dec 13 08:47:52.916147 kernel: acpiphp: Slot [31] registered Dec 13 08:47:52.916156 kernel: PCI host bridge to bus 0000:00 Dec 13 08:47:52.916262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 08:47:52.916349 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Dec 13 08:47:52.917530 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 08:47:52.917637 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 08:47:52.917733 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 08:47:52.917817 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 08:47:52.917993 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 08:47:52.918102 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 08:47:52.918219 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 08:47:52.918316 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Dec 13 08:47:52.919507 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 08:47:52.919655 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 08:47:52.919764 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 08:47:52.919858 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 08:47:52.919973 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Dec 13 08:47:52.920073 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Dec 13 08:47:52.920186 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 08:47:52.920283 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 08:47:52.922474 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 08:47:52.922642 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 08:47:52.922748 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 08:47:52.922845 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 08:47:52.922940 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Dec 13 08:47:52.923034 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 08:47:52.923128 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 08:47:52.923257 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 08:47:52.923392 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Dec 13 08:47:52.923549 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Dec 13 08:47:52.923647 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 08:47:52.923752 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 08:47:52.923849 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Dec 13 08:47:52.923954 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Dec 13 08:47:52.924046 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 08:47:52.924154 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Dec 13 08:47:52.924247 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Dec 13 08:47:52.924344 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Dec 13 08:47:52.927601 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 08:47:52.927801 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Dec 13 08:47:52.927909 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 08:47:52.928020 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Dec 13 08:47:52.928157 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 08:47:52.928334 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Dec 13 08:47:52.928520 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Dec 13 08:47:52.928672 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Dec 13 08:47:52.928778 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Dec 13 08:47:52.928958 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 08:47:52.929089 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Dec 13 08:47:52.929193 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 13 08:47:52.929213 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 08:47:52.929223 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 08:47:52.929232 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 08:47:52.929241 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 08:47:52.929255 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 08:47:52.929264 kernel: iommu: Default domain type: Translated Dec 13 08:47:52.929273 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 08:47:52.929282 kernel: PCI: Using ACPI for IRQ routing Dec 13 08:47:52.929291 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 08:47:52.929300 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 08:47:52.929310 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 13 08:47:52.929551 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 08:47:52.929660 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 08:47:52.929762 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 08:47:52.929775 kernel: vgaarb: loaded Dec 13 08:47:52.929784 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 08:47:52.929793 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 08:47:52.929802 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 08:47:52.929812 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 08:47:52.929822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 08:47:52.929831 kernel: pnp: PnP ACPI init Dec 13 08:47:52.929840 kernel: pnp: PnP ACPI: found 4 devices Dec 13 08:47:52.929852 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 08:47:52.929861 kernel: NET: Registered PF_INET protocol family Dec 13 08:47:52.929870 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 08:47:52.929941 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 08:47:52.929968 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 08:47:52.929979 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 08:47:52.929989 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 08:47:52.929998 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 08:47:52.930007 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 08:47:52.930022 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 08:47:52.930031 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 08:47:52.930040 kernel: NET: Registered PF_XDP protocol family Dec 13 08:47:52.930162 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 08:47:52.930252 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 
08:47:52.930339 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 08:47:52.931503 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 08:47:52.931613 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 08:47:52.931733 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 08:47:52.931838 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 08:47:52.931852 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 08:47:52.931952 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 41063 usecs Dec 13 08:47:52.931964 kernel: PCI: CLS 0 bytes, default 64 Dec 13 08:47:52.931974 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 08:47:52.931983 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Dec 13 08:47:52.931993 kernel: Initialise system trusted keyrings Dec 13 08:47:52.932007 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 08:47:52.932016 kernel: Key type asymmetric registered Dec 13 08:47:52.932025 kernel: Asymmetric key parser 'x509' registered Dec 13 08:47:52.932034 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 08:47:52.932043 kernel: io scheduler mq-deadline registered Dec 13 08:47:52.932052 kernel: io scheduler kyber registered Dec 13 08:47:52.932067 kernel: io scheduler bfq registered Dec 13 08:47:52.932079 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 08:47:52.932088 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 08:47:52.932101 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 08:47:52.932110 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 08:47:52.932119 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 08:47:52.932128 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 08:47:52.932137 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 08:47:52.932146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 08:47:52.932155 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 08:47:52.932164 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 08:47:52.932282 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 08:47:52.932378 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 08:47:52.933546 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T08:47:52 UTC (1734079672) Dec 13 08:47:52.933650 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 08:47:52.933662 kernel: intel_pstate: CPU model not supported Dec 13 08:47:52.933671 kernel: NET: Registered PF_INET6 protocol family Dec 13 08:47:52.933680 kernel: Segment Routing with IPv6 Dec 13 08:47:52.933689 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 08:47:52.933698 kernel: NET: Registered PF_PACKET protocol family Dec 13 08:47:52.933715 kernel: Key type dns_resolver registered Dec 13 08:47:52.933724 kernel: IPI shorthand broadcast: enabled Dec 13 08:47:52.933733 kernel: sched_clock: Marking stable (1073008313, 91489377)->(1181877613, -17379923) Dec 13 08:47:52.933742 kernel: registered taskstats version 1 Dec 13 08:47:52.933751 kernel: Loading compiled-in X.509 certificates Dec 13 08:47:52.933760 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 08:47:52.933769 kernel: Key type .fscrypt registered 
Dec 13 08:47:52.933778 kernel: Key type fscrypt-provisioning registered Dec 13 08:47:52.933787 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 08:47:52.933799 kernel: ima: Allocated hash algorithm: sha1 Dec 13 08:47:52.933825 kernel: ima: No architecture policies found Dec 13 08:47:52.933834 kernel: clk: Disabling unused clocks Dec 13 08:47:52.933843 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 08:47:52.933852 kernel: Write protecting the kernel read-only data: 36864k Dec 13 08:47:52.933908 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 08:47:52.933926 kernel: Run /init as init process Dec 13 08:47:52.933936 kernel: with arguments: Dec 13 08:47:52.933945 kernel: /init Dec 13 08:47:52.933958 kernel: with environment: Dec 13 08:47:52.933968 kernel: HOME=/ Dec 13 08:47:52.933977 kernel: TERM=linux Dec 13 08:47:52.933986 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 08:47:52.933998 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 08:47:52.934010 systemd[1]: Detected virtualization kvm. Dec 13 08:47:52.934020 systemd[1]: Detected architecture x86-64. Dec 13 08:47:52.934030 systemd[1]: Running in initrd. Dec 13 08:47:52.934043 systemd[1]: No hostname configured, using default hostname. Dec 13 08:47:52.934052 systemd[1]: Hostname set to . Dec 13 08:47:52.934062 systemd[1]: Initializing machine ID from VM UUID. Dec 13 08:47:52.934072 systemd[1]: Queued start job for default target initrd.target. Dec 13 08:47:52.934082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:47:52.934091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:47:52.934102 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 08:47:52.934112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 08:47:52.934124 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 08:47:52.934134 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 08:47:52.934146 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 08:47:52.934156 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 08:47:52.934166 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:47:52.934176 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:47:52.934188 systemd[1]: Reached target paths.target - Path Units. Dec 13 08:47:52.934198 systemd[1]: Reached target slices.target - Slice Units. Dec 13 08:47:52.934208 systemd[1]: Reached target swap.target - Swaps. Dec 13 08:47:52.934220 systemd[1]: Reached target timers.target - Timer Units. Dec 13 08:47:52.934230 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 08:47:52.934240 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 13 08:47:52.934253 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 08:47:52.934263 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 08:47:52.934272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:47:52.934282 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 08:47:52.934292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:47:52.934302 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:47:52.934312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 08:47:52.934322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 08:47:52.934334 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 08:47:52.934343 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 08:47:52.934353 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 08:47:52.934363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 08:47:52.934373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:47:52.934382 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 08:47:52.934392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:47:52.935656 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 08:47:52.935712 systemd-journald[184]: Collecting audit messages is disabled. Dec 13 08:47:52.935739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 08:47:52.935750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 08:47:52.935764 systemd-journald[184]: Journal started Dec 13 08:47:52.935785 systemd-journald[184]: Runtime Journal (/run/log/journal/af4c58e516904e34a4fb8bfdc08bab75) is 4.9M, max 39.3M, 34.4M free. Dec 13 08:47:52.912859 systemd-modules-load[185]: Inserted module 'overlay' Dec 13 08:47:52.965548 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 08:47:52.965578 kernel: Bridge firewalling registered Dec 13 08:47:52.965591 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 08:47:52.947318 systemd-modules-load[185]: Inserted module 'br_netfilter' Dec 13 08:47:52.964671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 08:47:52.967990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:47:52.974666 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:47:52.977656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:47:52.978964 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 08:47:52.992646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 08:47:53.005689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:47:53.010468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:47:53.015668 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 13 08:47:53.016357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:47:53.016983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:47:53.028670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 08:47:53.046454 dracut-cmdline[215]: dracut-dracut-053 Dec 13 08:47:53.052293 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 08:47:53.064136 systemd-resolved[218]: Positive Trust Anchors: Dec 13 08:47:53.064155 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 08:47:53.064200 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 08:47:53.067875 systemd-resolved[218]: Defaulting to hostname 'linux'. Dec 13 08:47:53.069732 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 08:47:53.070795 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:47:53.152455 kernel: SCSI subsystem initialized Dec 13 08:47:53.162430 kernel: Loading iSCSI transport class v2.0-870. Dec 13 08:47:53.173433 kernel: iscsi: registered transport (tcp) Dec 13 08:47:53.196438 kernel: iscsi: registered transport (qla4xxx) Dec 13 08:47:53.196531 kernel: QLogic iSCSI HBA Driver Dec 13 08:47:53.249725 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 08:47:53.256658 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 08:47:53.286349 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 08:47:53.286465 kernel: device-mapper: uevent: version 1.0.3 Dec 13 08:47:53.286488 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 08:47:53.336448 kernel: raid6: avx2x4 gen() 17178 MB/s Dec 13 08:47:53.353477 kernel: raid6: avx2x2 gen() 17704 MB/s Dec 13 08:47:53.370607 kernel: raid6: avx2x1 gen() 13151 MB/s Dec 13 08:47:53.370724 kernel: raid6: using algorithm avx2x2 gen() 17704 MB/s Dec 13 08:47:53.388666 kernel: raid6: .... xor() 19046 MB/s, rmw enabled Dec 13 08:47:53.388790 kernel: raid6: using avx2x2 recovery algorithm Dec 13 08:47:53.410460 kernel: xor: automatically using best checksumming function avx Dec 13 08:47:53.620537 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 08:47:53.636525 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 08:47:53.643770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 08:47:53.671799 systemd-udevd[401]: Using default interface naming scheme 'v255'. Dec 13 08:47:53.677541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:47:53.685641 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 08:47:53.703873 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Dec 13 08:47:53.747631 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 08:47:53.755715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 08:47:53.830636 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:47:53.838672 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 08:47:53.854792 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 08:47:53.856088 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 08:47:53.857501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:47:53.857864 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 08:47:53.865045 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 08:47:53.899146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 08:47:53.947962 kernel: scsi host0: Virtio SCSI HBA Dec 13 08:47:53.963673 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 13 08:47:54.017596 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 08:47:54.017770 kernel: ACPI: bus type USB registered Dec 13 08:47:54.017785 kernel: usbcore: registered new interface driver usbfs Dec 13 08:47:54.017797 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 08:47:54.017809 kernel: usbcore: registered new interface driver hub Dec 13 08:47:54.017820 kernel: usbcore: registered new device driver usb Dec 13 08:47:54.017832 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 08:47:54.017844 kernel: GPT:9289727 != 125829119 Dec 13 08:47:54.017855 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 13 08:47:54.023920 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 08:47:54.023949 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 13 08:47:54.024110 kernel: GPT:9289727 != 125829119 Dec 13 08:47:54.024123 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 08:47:54.024135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:47:54.024147 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 13 08:47:54.042489 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 13 08:47:54.042648 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 13 08:47:54.042764 kernel: hub 1-0:1.0: USB hub found Dec 13 08:47:54.042932 kernel: hub 1-0:1.0: 2 ports detected Dec 13 08:47:54.043082 kernel: virtio_blk virtio5: [vdb] 952 512-byte logical blocks (487 kB/476 KiB) Dec 13 08:47:54.043237 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 08:47:54.043252 kernel: AES CTR mode by8 optimization enabled Dec 13 08:47:54.043264 kernel: libata version 3.00 loaded. Dec 13 08:47:54.017306 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 08:47:54.017464 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 08:47:54.018636 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:47:54.018967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:47:54.019113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:47:54.019512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:47:54.050928 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 08:47:54.065746 kernel: scsi host1: ata_piix Dec 13 08:47:54.066044 kernel: scsi host2: ata_piix Dec 13 08:47:54.066173 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Dec 13 08:47:54.066187 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Dec 13 08:47:54.029864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:47:54.090607 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 08:47:54.119964 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Dec 13 08:47:54.119998 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (457) Dec 13 08:47:54.120705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:47:54.129415 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 08:47:54.139553 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 08:47:54.139998 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 08:47:54.145603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 08:47:54.155661 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 08:47:54.158605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:47:54.164464 disk-uuid[531]: Primary Header is updated. Dec 13 08:47:54.164464 disk-uuid[531]: Secondary Entries is updated. Dec 13 08:47:54.164464 disk-uuid[531]: Secondary Header is updated. Dec 13 08:47:54.179599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:47:54.183046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:47:54.186424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:47:54.200449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:47:55.194256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 08:47:55.194327 disk-uuid[532]: The operation has completed successfully. Dec 13 08:47:55.236742 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 08:47:55.236900 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 08:47:55.265730 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 08:47:55.269289 sh[565]: Success Dec 13 08:47:55.284102 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 08:47:55.343947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 08:47:55.357213 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 08:47:55.360175 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 08:47:55.394752 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 08:47:55.394877 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:47:55.397753 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 08:47:55.397905 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 08:47:55.398745 kernel: BTRFS info (device dm-0): using free space tree Dec 13 08:47:55.410286 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 08:47:55.412255 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 08:47:55.419763 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 08:47:55.422685 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 08:47:55.444891 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:47:55.445027 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:47:55.445073 kernel: BTRFS info (device vda6): using free space tree Dec 13 08:47:55.451436 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 08:47:55.468465 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:47:55.468712 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 08:47:55.480355 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 08:47:55.491945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 08:47:55.637214 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 08:47:55.646658 ignition[653]: Ignition 2.19.0 Dec 13 08:47:55.646670 ignition[653]: Stage: fetch-offline Dec 13 08:47:55.647681 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:47:55.646717 ignition[653]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:47:55.649614 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:47:55.646729 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:47:55.646858 ignition[653]: parsed url from cmdline: "" Dec 13 08:47:55.646863 ignition[653]: no config URL provided Dec 13 08:47:55.646870 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:47:55.646879 ignition[653]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:47:55.646886 ignition[653]: failed to fetch config: resource requires networking Dec 13 08:47:55.647116 ignition[653]: Ignition finished successfully Dec 13 08:47:55.682370 systemd-networkd[752]: lo: Link UP Dec 13 08:47:55.682381 systemd-networkd[752]: lo: Gained carrier Dec 13 08:47:55.684748 systemd-networkd[752]: Enumeration completed Dec 13 08:47:55.685190 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 08:47:55.685194 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 13 08:47:55.685436 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:47:55.686376 systemd[1]: Reached target network.target - Network. 
Dec 13 08:47:55.686389 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:47:55.686393 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:47:55.687068 systemd-networkd[752]: eth0: Link UP Dec 13 08:47:55.687073 systemd-networkd[752]: eth0: Gained carrier Dec 13 08:47:55.687082 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 08:47:55.691355 systemd-networkd[752]: eth1: Link UP Dec 13 08:47:55.691359 systemd-networkd[752]: eth1: Gained carrier Dec 13 08:47:55.691373 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:47:55.694693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 08:47:55.704515 systemd-networkd[752]: eth0: DHCPv4 address 64.23.129.27/20, gateway 64.23.128.1 acquired from 169.254.169.253 Dec 13 08:47:55.708523 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.2/20, gateway 10.124.0.1 acquired from 169.254.169.253 Dec 13 08:47:55.722193 ignition[755]: Ignition 2.19.0 Dec 13 08:47:55.722204 ignition[755]: Stage: fetch Dec 13 08:47:55.722394 ignition[755]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:47:55.722420 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:47:55.722586 ignition[755]: parsed url from cmdline: "" Dec 13 08:47:55.722592 ignition[755]: no config URL provided Dec 13 08:47:55.722601 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:47:55.722614 ignition[755]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:47:55.722642 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 13 08:47:55.738727 ignition[755]: GET result: OK Dec 13 08:47:55.738857 ignition[755]: parsing config with SHA512: 6ac465e508206b859c1bc9e25c25db80011774a8d27fa08064ab158f46cdd5b911785a698d5f5c795f22a76dcc4fa37cd9d82d1f00f093ebfb95f2eca9e88be1 Dec 13 08:47:55.745122 unknown[755]: fetched base config from "system" Dec 13 08:47:55.745146 unknown[755]: fetched base config from "system" Dec 13 08:47:55.745156 unknown[755]: fetched user config from "digitalocean" Dec 13 08:47:55.746131 ignition[755]: fetch: fetch complete Dec 13 08:47:55.746140 ignition[755]: fetch: fetch passed Dec 13 08:47:55.746218 ignition[755]: Ignition finished successfully Dec 13 08:47:55.749148 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 08:47:55.752735 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 08:47:55.774851 ignition[762]: Ignition 2.19.0 Dec 13 08:47:55.774865 ignition[762]: Stage: kargs Dec 13 08:47:55.775085 ignition[762]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:47:55.775099 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:47:55.776292 ignition[762]: kargs: kargs passed Dec 13 08:47:55.778124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 08:47:55.776365 ignition[762]: Ignition finished successfully Dec 13 08:47:55.783660 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 08:47:55.803989 ignition[768]: Ignition 2.19.0 Dec 13 08:47:55.804009 ignition[768]: Stage: disks Dec 13 08:47:55.804267 ignition[768]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:47:55.804286 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:47:55.806817 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 08:47:55.805472 ignition[768]: disks: disks passed Dec 13 08:47:55.805530 ignition[768]: Ignition finished successfully Dec 13 08:47:55.811105 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 08:47:55.811942 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 08:47:55.812578 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 08:47:55.813377 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:47:55.814482 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:47:55.824706 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 08:47:55.841498 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 08:47:55.846507 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 08:47:55.850692 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 08:47:55.962440 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 08:47:55.962888 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 08:47:55.964038 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 08:47:55.970588 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:47:55.973642 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 08:47:55.977747 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Dec 13 08:47:55.984630 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 08:47:55.987106 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 08:47:55.987773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:47:55.992433 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784) Dec 13 08:47:55.995713 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 08:47:55.996487 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:47:56.004554 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:47:56.004627 kernel: BTRFS info (device vda6): using free space tree Dec 13 08:47:56.004843 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 08:47:56.016571 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 08:47:56.025306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 08:47:56.086310 coreos-metadata[787]: Dec 13 08:47:56.086 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:47:56.093558 coreos-metadata[786]: Dec 13 08:47:56.093 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:47:56.099053 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 08:47:56.101579 coreos-metadata[787]: Dec 13 08:47:56.099 INFO Fetch successful Dec 13 08:47:56.104436 coreos-metadata[787]: Dec 13 08:47:56.104 INFO wrote hostname ci-4081.2.1-7-437820f1b8 to /sysroot/etc/hostname Dec 13 08:47:56.106544 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 08:47:56.109009 coreos-metadata[786]: Dec 13 08:47:56.108 INFO Fetch successful Dec 13 08:47:56.116013 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Dec 13 08:47:56.116115 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Dec 13 08:47:56.116292 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Dec 13 08:47:56.125147 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 08:47:56.131790 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 08:47:56.251521 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 08:47:56.258643 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 08:47:56.263745 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 08:47:56.280457 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:47:56.306262 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 08:47:56.322887 ignition[909]: INFO : Ignition 2.19.0 Dec 13 08:47:56.323824 ignition[909]: INFO : Stage: mount Dec 13 08:47:56.324298 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:47:56.324298 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:47:56.325619 ignition[909]: INFO : mount: mount passed Dec 13 08:47:56.326128 ignition[909]: INFO : Ignition finished successfully Dec 13 08:47:56.327132 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 08:47:56.333622 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 08:47:56.393164 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 08:47:56.399778 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:47:56.429472 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919) Dec 13 08:47:56.432753 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 08:47:56.432846 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 08:47:56.434811 kernel: BTRFS info (device vda6): using free space tree Dec 13 08:47:56.439468 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 08:47:56.443443 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
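[Editorial sketch] The coreos-metadata units fetch http://169.254.169.254/metadata/v1.json and write the droplet's hostname into /sysroot/etc/hostname. A rough standard-library sketch of that flow; the top-level "hostname" field is an assumption about the DigitalOcean metadata schema, and the output path is kept local rather than touching /sysroot:

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # endpoint shown in the log

    def fetch_metadata(url: str = METADATA_URL, timeout: float = 5.0) -> dict:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)

    def write_hostname(meta: dict, path: str = "./hostname") -> str:
        # Assumed field name; the agent in the log writes "ci-4081.2.1-7-437820f1b8".
        hostname = meta["hostname"]
        with open(path, "w") as f:
            f.write(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print("wrote hostname", write_hostname(fetch_metadata()))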
Dec 13 08:47:56.488442 ignition[936]: INFO : Ignition 2.19.0 Dec 13 08:47:56.488442 ignition[936]: INFO : Stage: files Dec 13 08:47:56.488442 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:47:56.488442 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:47:56.491934 ignition[936]: DEBUG : files: compiled without relabeling support, skipping Dec 13 08:47:56.491934 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 08:47:56.491934 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 08:47:56.496074 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 08:47:56.496934 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 08:47:56.496934 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 08:47:56.496879 unknown[936]: wrote ssh authorized keys file for user: core Dec 13 08:47:56.499522 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 08:47:56.500318 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 08:47:56.556080 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:47:56.642435 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:47:56.650300 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:47:56.650300 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:47:56.650300 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:47:56.650300 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:47:56.650300 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:47:56.650300 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 08:47:57.209662 systemd-networkd[752]: eth0: Gained IPv6LL Dec 13 08:47:57.235215 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 08:47:57.337948 systemd-networkd[752]: eth1: Gained IPv6LL Dec 13 08:47:58.168337 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 08:47:58.168337 ignition[936]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 08:47:58.170197 ignition[936]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:47:58.170197 ignition[936]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:47:58.170197 ignition[936]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 08:47:58.170197 ignition[936]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 08:47:58.170197 ignition[936]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 08:47:58.170197 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:47:58.170197 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:47:58.170197 ignition[936]: INFO : files: files passed Dec 13 08:47:58.170197 ignition[936]: INFO : Ignition finished successfully Dec 13 08:47:58.174110 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 08:47:58.180820 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 08:47:58.183625 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 08:47:58.190806 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 08:47:58.191593 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 08:47:58.201308 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:47:58.201308 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:47:58.203395 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:47:58.205315 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:47:58.206124 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 08:47:58.212773 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 08:47:58.255047 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Dec 13 08:47:58.255168 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 08:47:58.256302 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 08:47:58.256907 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 08:47:58.257746 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 08:47:58.270841 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 08:47:58.291701 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:47:58.297788 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 08:47:58.312532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:47:58.313118 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:47:58.313998 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 08:47:58.314805 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 08:47:58.315013 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:47:58.316278 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 08:47:58.317436 systemd[1]: Stopped target basic.target - Basic System. Dec 13 08:47:58.318383 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 08:47:58.319279 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:47:58.320120 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 08:47:58.321098 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 08:47:58.322077 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 08:47:58.322928 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 08:47:58.323753 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 08:47:58.324710 systemd[1]: Stopped target swap.target - Swaps. Dec 13 08:47:58.325484 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 08:47:58.325631 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 08:47:58.326632 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:47:58.327153 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:47:58.327974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 08:47:58.328197 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:47:58.328966 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 08:47:58.329129 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 08:47:58.330381 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 08:47:58.330655 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:47:58.331508 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 08:47:58.331690 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 08:47:58.332311 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 08:47:58.332437 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 13 08:47:58.339741 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 08:47:58.340374 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 08:47:58.340622 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:47:58.343175 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 08:47:58.345591 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 08:47:58.346384 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:47:58.346923 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 08:47:58.347034 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 08:47:58.354804 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 08:47:58.355383 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 08:47:58.374295 ignition[990]: INFO : Ignition 2.19.0 Dec 13 08:47:58.376441 ignition[990]: INFO : Stage: umount Dec 13 08:47:58.376441 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:47:58.376441 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 08:47:58.378728 ignition[990]: INFO : umount: umount passed Dec 13 08:47:58.378728 ignition[990]: INFO : Ignition finished successfully Dec 13 08:47:58.382161 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 08:47:58.383269 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 08:47:58.390000 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 08:47:58.390909 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 08:47:58.391229 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 08:47:58.393039 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 08:47:58.393160 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 08:47:58.394226 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 08:47:58.394302 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 08:47:58.394997 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 08:47:58.395064 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 08:47:58.395758 systemd[1]: Stopped target network.target - Network. Dec 13 08:47:58.396682 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 08:47:58.396800 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:47:58.397669 systemd[1]: Stopped target paths.target - Path Units. Dec 13 08:47:58.398378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 08:47:58.398527 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:47:58.399135 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 08:47:58.400122 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 08:47:58.400987 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 08:47:58.401104 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 08:47:58.402142 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 08:47:58.402214 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 08:47:58.403350 systemd[1]: ignition-setup.service: Deactivated successfully. 
Dec 13 08:47:58.403469 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 08:47:58.404094 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 08:47:58.404176 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 08:47:58.404885 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 08:47:58.404970 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 08:47:58.405935 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 08:47:58.407234 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 08:47:58.412511 systemd-networkd[752]: eth0: DHCPv6 lease lost Dec 13 08:47:58.417310 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 08:47:58.417479 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 08:47:58.418524 systemd-networkd[752]: eth1: DHCPv6 lease lost Dec 13 08:47:58.421955 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 08:47:58.422122 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 08:47:58.424845 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 08:47:58.424909 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:47:58.437665 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 08:47:58.438323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 08:47:58.438428 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 08:47:58.438860 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 08:47:58.438907 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:47:58.439352 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 08:47:58.439508 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 08:47:58.440031 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 08:47:58.440093 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:47:58.440973 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:47:58.456919 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 08:47:58.457148 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:47:58.458642 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 08:47:58.458768 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 08:47:58.459988 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 08:47:58.460058 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 08:47:58.461085 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 08:47:58.461138 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:47:58.462029 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 08:47:58.462099 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 08:47:58.463305 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 08:47:58.463372 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 08:47:58.464128 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 08:47:58.464193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:47:58.471646 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 08:47:58.473136 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 08:47:58.473238 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:47:58.474694 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:47:58.474787 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:47:58.480455 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 08:47:58.480625 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 08:47:58.481797 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 08:47:58.487699 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 08:47:58.498349 systemd[1]: Switching root. Dec 13 08:47:58.533695 systemd-journald[184]: Journal stopped Dec 13 08:47:59.945781 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Dec 13 08:47:59.945973 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 08:47:59.946011 kernel: SELinux: policy capability open_perms=1 Dec 13 08:47:59.946039 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 08:47:59.946059 kernel: SELinux: policy capability always_check_network=0 Dec 13 08:47:59.946079 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 08:47:59.946100 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 08:47:59.946129 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 08:47:59.946149 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 08:47:59.946176 kernel: audit: type=1403 audit(1734079678.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 08:47:59.946206 systemd[1]: Successfully loaded SELinux policy in 38.510ms. Dec 13 08:47:59.946240 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.570ms. Dec 13 08:47:59.946263 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 08:47:59.946284 systemd[1]: Detected virtualization kvm. Dec 13 08:47:59.946307 systemd[1]: Detected architecture x86-64. Dec 13 08:47:59.946328 systemd[1]: Detected first boot. Dec 13 08:47:59.946351 systemd[1]: Hostname set to . Dec 13 08:47:59.946381 systemd[1]: Initializing machine ID from VM UUID. Dec 13 08:47:59.948439 zram_generator::config[1033]: No configuration found. Dec 13 08:47:59.948488 systemd[1]: Populated /etc with preset unit settings. Dec 13 08:47:59.948514 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 08:47:59.948538 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 08:47:59.948562 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 08:47:59.948586 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 08:47:59.948621 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
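[Editorial sketch] The systemd 255 banner above lists compile-time features as a run of "+NAME"/"-NAME" tokens. A small helper that splits such a banner into enabled and disabled sets (purely illustrative; the sample string below is abbreviated from the log line):

    def parse_features(banner: str) -> tuple[set[str], set[str]]:
        enabled, disabled = set(), set()
        for token in banner.split():
            if token.startswith("+"):
                enabled.add(token[1:])
            elif token.startswith("-"):
                disabled.add(token[1:])
        return enabled, disabled

    # Abbreviated from the "systemd 255 running in system mode (...)" banner.
    sample = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GNUTLS +OPENSSL -ACL +TPM2 -SYSVINIT"
    on, off = parse_features(sample)
    print("enabled:", sorted(on))
    print("disabled:", sorted(off))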
Dec 13 08:47:59.948652 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 08:47:59.948675 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 08:47:59.948697 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 08:47:59.948723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 08:47:59.948746 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 08:47:59.948769 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 08:47:59.948792 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:47:59.948814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:47:59.948836 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 08:47:59.948861 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 08:47:59.948882 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 08:47:59.948910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 08:47:59.948930 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 08:47:59.948947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:47:59.948967 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 08:47:59.948997 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 08:47:59.949032 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 08:47:59.949051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 08:47:59.949069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:47:59.949087 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 08:47:59.949107 systemd[1]: Reached target slices.target - Slice Units. Dec 13 08:47:59.949125 systemd[1]: Reached target swap.target - Swaps. Dec 13 08:47:59.949146 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 08:47:59.949168 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 08:47:59.949195 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:47:59.949215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 08:47:59.949235 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:47:59.949254 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 08:47:59.949272 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 08:47:59.949290 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 08:47:59.949309 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 08:47:59.949333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:47:59.949353 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 08:47:59.949384 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
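[Editorial sketch] Several unit names above carry systemd's escaping, e.g. system-serial\x2dgetty.slice and dev-disk-by\x2dlabel-OEM.device: '/' becomes '-' and other special bytes become \xNN. A simplified sketch of that rule, covering the common cases seen in this log rather than every corner of systemd-escape:

    def systemd_escape_path(path: str) -> str:
        # Simplified take on `systemd-escape --path`: trim slashes at the ends,
        # turn '/' into '-', and hex-escape anything outside [A-Za-z0-9:_.]
        # (plus a leading '.').
        trimmed = path.strip("/") or "/"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or (ch in ":_." and not (ch == "." and i == 0)):
                out.append(ch)
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)

    # /dev/disk/by-label/OEM -> dev-disk-by\x2dlabel-OEM (the .device suffix is added by systemd)
    print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")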
Dec 13 08:47:59.952770 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 08:47:59.952837 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 08:47:59.952863 systemd[1]: Reached target machines.target - Containers. Dec 13 08:47:59.952886 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 08:47:59.952907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:47:59.952926 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 08:47:59.952948 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 08:47:59.952971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:47:59.953006 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:47:59.953030 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:47:59.953053 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 08:47:59.953076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:47:59.953099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 08:47:59.953122 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 08:47:59.953143 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 08:47:59.953165 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 08:47:59.953194 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 08:47:59.953215 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 08:47:59.953239 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 08:47:59.953260 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 08:47:59.953285 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 08:47:59.953309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 08:47:59.953329 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 08:47:59.953353 systemd[1]: Stopped verity-setup.service. Dec 13 08:47:59.953374 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:47:59.953417 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 08:47:59.953445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 08:47:59.953468 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 08:47:59.953491 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 08:47:59.953514 kernel: fuse: init (API version 7.39) Dec 13 08:47:59.953546 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 08:47:59.953569 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 08:47:59.953591 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:47:59.953613 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 13 08:47:59.953635 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 08:47:59.953658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:47:59.953685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:47:59.953709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:47:59.953731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:47:59.953753 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 08:47:59.953776 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 08:47:59.953798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 08:47:59.953820 kernel: loop: module loaded Dec 13 08:47:59.953856 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 08:47:59.953883 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:47:59.953903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:47:59.953925 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 08:47:59.953948 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 08:47:59.953969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 08:47:59.954058 systemd-journald[1109]: Collecting audit messages is disabled. Dec 13 08:47:59.954112 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 08:47:59.954143 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 08:47:59.954166 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 08:47:59.954191 systemd-journald[1109]: Journal started Dec 13 08:47:59.954234 systemd-journald[1109]: Runtime Journal (/run/log/journal/af4c58e516904e34a4fb8bfdc08bab75) is 4.9M, max 39.3M, 34.4M free. Dec 13 08:47:59.958845 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 08:47:59.494756 systemd[1]: Queued start job for default target multi-user.target. Dec 13 08:47:59.520707 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 08:47:59.521522 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 08:47:59.968703 kernel: ACPI: bus type drm_connector registered Dec 13 08:47:59.968782 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 08:47:59.973606 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 08:47:59.973732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:47:59.986852 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 08:47:59.986958 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:47:59.997004 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 08:47:59.997144 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:48:00.007026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
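[Editorial sketch] Each record in this console log follows the same shape: a timestamp, a source such as systemd[1], systemd-journald[1109], or kernel, and a message. A small regex-based parser for that shape, handy when slicing logs like the lines above (the pattern is inferred from this log only, not from any journald format guarantee):

    import re

    # Example: "Dec 13 08:47:59.954234 systemd-journald[1109]: Journal started"
    LINE_RE = re.compile(
        r"^(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<source>[\w@.\-]+)(?:\[(?P<pid>\d+)\])?: (?P<message>.*)$"
    )

    def parse(line: str) -> dict | None:
        m = LINE_RE.match(line)
        return m.groupdict() if m else None

    sample = "Dec 13 08:47:59.954234 systemd-journald[1109]: Journal started"
    print(parse(sample))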
Dec 13 08:48:00.013570 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 08:48:00.018441 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 08:48:00.021727 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 08:48:00.022893 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:48:00.023199 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 08:48:00.024943 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 08:48:00.025801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 08:48:00.030515 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 08:48:00.100038 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 08:48:00.113585 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 08:48:00.115979 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 08:48:00.118372 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 08:48:00.139628 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 08:48:00.134466 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 08:48:00.177169 systemd-journald[1109]: Time spent on flushing to /var/log/journal/af4c58e516904e34a4fb8bfdc08bab75 is 48.134ms for 991 entries. Dec 13 08:48:00.177169 systemd-journald[1109]: System Journal (/var/log/journal/af4c58e516904e34a4fb8bfdc08bab75) is 8.0M, max 195.6M, 187.6M free. Dec 13 08:48:00.234008 systemd-journald[1109]: Received client request to flush runtime journal. Dec 13 08:48:00.234103 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 08:48:00.207680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:48:00.226277 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 08:48:00.242473 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 08:48:00.244580 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 08:48:00.248740 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 08:48:00.245578 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 08:48:00.247524 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 08:48:00.295896 kernel: loop2: detected capacity change from 0 to 210664 Dec 13 08:48:00.303045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:48:00.315282 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 08:48:00.366140 kernel: loop3: detected capacity change from 0 to 8 Dec 13 08:48:00.385764 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 08:48:00.410460 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 08:48:00.419896 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Dec 13 08:48:00.419930 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. 
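[Editorial sketch] journald reports spending 48.134ms flushing 991 entries to the persistent journal. As a tiny worked example, that averages out per entry as follows (values copied from the log):

    flush_ms = 48.134   # "Time spent on flushing ... is 48.134ms"
    entries = 991       # "... for 991 entries"

    per_entry_us = flush_ms * 1000 / entries
    print(f"average flush cost: {per_entry_us:.1f} microseconds per journal entry")  # ~48.6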
Dec 13 08:48:00.439123 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 08:48:00.456449 kernel: loop6: detected capacity change from 0 to 210664 Dec 13 08:48:00.460025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:48:00.507827 kernel: loop7: detected capacity change from 0 to 8 Dec 13 08:48:00.510544 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Dec 13 08:48:00.511370 (sd-merge)[1177]: Merged extensions into '/usr'. Dec 13 08:48:00.519288 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 08:48:00.519320 systemd[1]: Reloading... Dec 13 08:48:00.668462 zram_generator::config[1202]: No configuration found. Dec 13 08:48:00.991630 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 08:48:01.068032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:48:01.137606 systemd[1]: Reloading finished in 617 ms. Dec 13 08:48:01.172251 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 08:48:01.174655 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 08:48:01.190031 systemd[1]: Starting ensure-sysext.service... Dec 13 08:48:01.203831 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 08:48:01.225650 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Dec 13 08:48:01.225685 systemd[1]: Reloading... Dec 13 08:48:01.292141 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 08:48:01.292748 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 08:48:01.297065 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 08:48:01.298586 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 13 08:48:01.298688 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 13 08:48:01.307164 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:48:01.307182 systemd-tmpfiles[1249]: Skipping /boot Dec 13 08:48:01.338384 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:48:01.340493 systemd-tmpfiles[1249]: Skipping /boot Dec 13 08:48:01.445436 zram_generator::config[1285]: No configuration found. Dec 13 08:48:01.593326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:48:01.663208 systemd[1]: Reloading finished in 436 ms. Dec 13 08:48:01.683620 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 08:48:01.685025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:48:01.715842 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 08:48:01.720694 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
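[Editorial sketch] The (sd-merge) lines show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean extension images into /usr. A rough sketch of the discovery half of that job: scanning the usual extension directories for *.raw images or plain directory trees and reporting what would be merged (the directory list and logic are simplified assumptions, not systemd's actual algorithm):

    from pathlib import Path

    # Common sysext search locations (assumed; systemd checks several such hierarchies).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions(dirs=SEARCH_DIRS) -> list[str]:
        found = []
        for d in map(Path, dirs):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                # Raw disk images ("kubernetes.raw") and plain directory trees both count.
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(entry.stem if entry.suffix == ".raw" else entry.name)
        return found

    if __name__ == "__main__":
        print("would merge:", ", ".join(discover_extensions()) or "(none)")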
Dec 13 08:48:01.733455 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 08:48:01.738224 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 08:48:01.741537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:48:01.745791 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 08:48:01.753281 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:01.753614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:48:01.760893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:48:01.765847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:48:01.769784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:48:01.770422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:48:01.770598 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:01.777053 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:01.777265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:48:01.777556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:48:01.777720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:01.786355 systemd[1]: Finished ensure-sysext.service. Dec 13 08:48:01.790289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:01.791851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:48:01.795578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:48:01.796936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:48:01.807751 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 08:48:01.818829 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 08:48:01.820541 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:01.859528 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 08:48:01.875311 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 08:48:01.876051 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 08:48:01.880854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 08:48:01.883707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:48:01.886181 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:48:01.887579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:48:01.896879 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Dec 13 08:48:01.897275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:48:01.912283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:48:01.913076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:48:01.915839 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:48:01.917273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 08:48:01.922361 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:48:01.939236 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 08:48:01.947792 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 08:48:01.954527 augenrules[1356]: No rules Dec 13 08:48:01.957240 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:48:01.972557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:48:01.987804 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:48:01.992555 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 08:48:02.047817 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 08:48:02.163060 systemd-networkd[1365]: lo: Link UP Dec 13 08:48:02.163861 systemd-networkd[1365]: lo: Gained carrier Dec 13 08:48:02.166447 systemd-networkd[1365]: Enumeration completed Dec 13 08:48:02.166795 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:48:02.174744 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 08:48:02.182505 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370) Dec 13 08:48:02.198718 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 08:48:02.215430 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1370) Dec 13 08:48:02.229863 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 13 08:48:02.230487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:02.230752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:48:02.237741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:48:02.250806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:48:02.254701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:48:02.256639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 08:48:02.256700 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 08:48:02.256719 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 08:48:02.275796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:48:02.275980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:48:02.276760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:48:02.297444 kernel: ISO 9660 Extensions: RRIP_1991A Dec 13 08:48:02.298253 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 13 08:48:02.305852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:48:02.306082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:48:02.318481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1374) Dec 13 08:48:02.319221 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 08:48:02.321390 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 08:48:02.327731 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:48:02.329721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:48:02.332043 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:48:02.334552 systemd-resolved[1331]: Positive Trust Anchors: Dec 13 08:48:02.334569 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 08:48:02.334607 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 08:48:02.342584 systemd-resolved[1331]: Using system hostname 'ci-4081.2.1-7-437820f1b8'. Dec 13 08:48:02.345250 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 08:48:02.346725 systemd[1]: Reached target network.target - Network. Dec 13 08:48:02.347045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:48:02.440744 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 08:48:02.461164 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 08:48:02.461210 kernel: ACPI: button: Power Button [PWRF] Dec 13 08:48:02.453282 systemd-networkd[1365]: eth1: Configuring with /run/systemd/network/10-b2:59:aa:d1:d1:57.network. Dec 13 08:48:02.457018 systemd-networkd[1365]: eth0: Configuring with /run/systemd/network/10-e2:d5:76:d4:d4:f4.network. 
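[Editorial sketch] The DigitalOcean network agent drops per-interface units such as /run/systemd/network/10-e2:d5:76:d4:d4:f4.network, so networkd can match each NIC by MAC address instead of the "potentially unpredictable interface name" warned about earlier. A sketch of generating one such file; the [Match]/[Network] layout and the MACAddress=/DHCP= keys are standard networkd syntax, but exactly what the agent writes is an assumption:

    from pathlib import Path

    def render_network_unit(mac: str, dhcp: str = "ipv4") -> str:
        # Match the interface by MAC address rather than by its kernel name.
        return (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            f"DHCP={dhcp}\n"
        )

    def write_unit(mac: str, outdir: str = "./network-units") -> Path:
        # Mirrors the naming seen in the log: 10-<mac>.network under /run/systemd/network.
        path = Path(outdir) / f"10-{mac}.network"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(render_network_unit(mac))
        return path

    print(write_unit("e2:d5:76:d4:d4:f4").read_text())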
Dec 13 08:48:02.459105 systemd-networkd[1365]: eth1: Link UP Dec 13 08:48:02.459112 systemd-networkd[1365]: eth1: Gained carrier Dec 13 08:48:02.464892 systemd-networkd[1365]: eth0: Link UP Dec 13 08:48:02.464901 systemd-networkd[1365]: eth0: Gained carrier Dec 13 08:48:02.472049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 08:48:02.476088 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Dec 13 08:48:02.480866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 08:48:02.505596 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 08:48:02.541718 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 08:48:03.262830 systemd-resolved[1331]: Clock change detected. Flushing caches. Dec 13 08:48:03.263275 systemd-timesyncd[1340]: Contacted time server 172.234.37.140:123 (0.flatcar.pool.ntp.org). Dec 13 08:48:03.263517 systemd-timesyncd[1340]: Initial clock synchronization to Fri 2024-12-13 08:48:03.262534 UTC. Dec 13 08:48:03.279039 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 08:48:03.303181 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 08:48:03.303297 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 08:48:03.303714 kernel: Console: switching to colour dummy device 80x25 Dec 13 08:48:03.303744 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 08:48:03.303766 kernel: [drm] features: -context_init Dec 13 08:48:03.303787 kernel: [drm] number of scanouts: 1 Dec 13 08:48:03.304254 kernel: [drm] number of cap sets: 0 Dec 13 08:48:03.308468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:48:03.321035 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 08:48:03.340824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:48:03.341220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:48:03.342058 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 08:48:03.342116 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 08:48:03.352222 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 08:48:03.356482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:48:03.410100 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:48:03.410507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:48:03.427278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:48:03.474046 kernel: EDAC MC: Ver: 3.0.0 Dec 13 08:48:03.511151 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 08:48:03.521492 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 08:48:03.531991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:48:03.547728 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:48:03.588252 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 08:48:03.591392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
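[Editorial sketch] systemd-timesyncd contacts 172.234.37.140:123 (0.flatcar.pool.ntp.org) and the clock steps forward, which is why systemd-resolved flushes its caches. For illustration, a bare-bones SNTP query that reads a server's transmit timestamp and compares it with the local clock; this is a toy client using the server name from the log, not what timesyncd does internally:

    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

    def sntp_time(server: str = "0.flatcar.pool.ntp.org", timeout: float = 5.0) -> float:
        # 48-byte request: LI=0, VN=3, Mode=3 (client) in the first byte, rest zero.
        packet = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
        # Transmit timestamp: 32-bit seconds at offset 40, 32-bit fraction at offset 44.
        secs, frac = struct.unpack("!II", data[40:48])
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    if __name__ == "__main__":
        remote = sntp_time()
        print(f"offset vs. local clock: {remote - time.time():+.3f} s")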
Dec 13 08:48:03.591604 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:48:03.592160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 08:48:03.594023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 08:48:03.594551 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 08:48:03.594865 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 08:48:03.595019 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 08:48:03.595139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 08:48:03.595183 systemd[1]: Reached target paths.target - Path Units. Dec 13 08:48:03.595273 systemd[1]: Reached target timers.target - Timer Units. Dec 13 08:48:03.598198 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 08:48:03.602525 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 08:48:03.617837 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 08:48:03.627403 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 08:48:03.630887 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 08:48:03.632966 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:48:03.635465 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:48:03.638169 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:48:03.638217 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:48:03.645465 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 08:48:03.646447 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:48:03.661081 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 08:48:03.676461 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 08:48:03.688965 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 08:48:03.695474 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 08:48:03.697380 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 08:48:03.712373 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 08:48:03.717265 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 08:48:03.726336 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 08:48:03.733153 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 08:48:03.748367 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 08:48:03.759399 jq[1438]: false Dec 13 08:48:03.749878 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 08:48:03.752273 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 13 08:48:03.759486 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 08:48:03.773391 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 08:48:03.778082 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 08:48:03.795089 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 08:48:03.795451 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 08:48:03.824718 dbus-daemon[1437]: [system] SELinux support is enabled Dec 13 08:48:03.829843 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 08:48:03.836279 jq[1448]: true Dec 13 08:48:03.851332 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 08:48:03.851679 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 08:48:03.857689 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 08:48:03.857788 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 08:48:03.863679 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 08:48:03.863859 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 13 08:48:03.863906 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 13 08:48:03.873106 coreos-metadata[1436]: Dec 13 08:48:03.861 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:48:03.892780 coreos-metadata[1436]: Dec 13 08:48:03.892 INFO Fetch successful Dec 13 08:48:03.916056 extend-filesystems[1439]: Found loop4 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found loop5 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found loop6 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found loop7 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda1 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda2 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda3 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found usr Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda4 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda6 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda7 Dec 13 08:48:03.916056 extend-filesystems[1439]: Found vda9 Dec 13 08:48:03.916056 extend-filesystems[1439]: Checking size of /dev/vda9 Dec 13 08:48:04.051941 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 13 08:48:04.054187 jq[1462]: true Dec 13 08:48:03.928769 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 08:48:04.054995 update_engine[1447]: I20241213 08:48:03.918121 1447 main.cc:92] Flatcar Update Engine starting Dec 13 08:48:04.054995 update_engine[1447]: I20241213 08:48:03.938161 1447 update_check_scheduler.cc:74] Next update check in 2m44s Dec 13 08:48:04.067200 extend-filesystems[1439]: Resized partition /dev/vda9 Dec 13 08:48:03.932500 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 08:48:04.068993 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) Dec 13 08:48:03.932827 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 08:48:04.094786 tar[1457]: linux-amd64/helm Dec 13 08:48:03.968669 systemd[1]: Started update-engine.service - Update Engine. Dec 13 08:48:03.995246 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 08:48:04.076234 systemd-logind[1446]: New seat seat0. Dec 13 08:48:04.088157 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 08:48:04.088187 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 08:48:04.088631 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 08:48:04.146915 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 08:48:04.151159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1378) Dec 13 08:48:04.164780 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 08:48:04.169112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 08:48:04.174075 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 08:48:04.174075 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 08:48:04.174075 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 08:48:04.197578 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Dec 13 08:48:04.197578 extend-filesystems[1439]: Found vdb Dec 13 08:48:04.179408 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 13 08:48:04.179654 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 08:48:04.225691 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 08:48:04.237033 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:48:04.263319 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 08:48:04.289791 systemd[1]: Starting sshkeys.service... Dec 13 08:48:04.420492 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 08:48:04.432585 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 08:48:04.440261 systemd-networkd[1365]: eth0: Gained IPv6LL Dec 13 08:48:04.448585 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 08:48:04.456099 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 08:48:04.465203 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 08:48:04.492158 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 08:48:04.507924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:04.516419 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 08:48:04.522517 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 08:48:04.568853 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 08:48:04.574328 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 08:48:04.606549 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 08:48:04.650062 coreos-metadata[1517]: Dec 13 08:48:04.649 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:48:04.667362 coreos-metadata[1517]: Dec 13 08:48:04.666 INFO Fetch successful Dec 13 08:48:04.690988 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 08:48:04.696918 unknown[1517]: wrote ssh authorized keys file for user: core Dec 13 08:48:04.743846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 08:48:04.748650 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:48:04.751247 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 08:48:04.764217 systemd[1]: Finished sshkeys.service. Dec 13 08:48:04.790047 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 08:48:04.805772 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 08:48:04.810817 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 08:48:04.825572 containerd[1461]: time="2024-12-13T08:48:04.825397283Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 08:48:04.916960 containerd[1461]: time="2024-12-13T08:48:04.916866213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:48:04.920844 containerd[1461]: time="2024-12-13T08:48:04.920719586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:48:04.920844 containerd[1461]: time="2024-12-13T08:48:04.920796750Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 08:48:04.920844 containerd[1461]: time="2024-12-13T08:48:04.920819695Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 08:48:04.921187 containerd[1461]: time="2024-12-13T08:48:04.921046783Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 08:48:04.921187 containerd[1461]: time="2024-12-13T08:48:04.921065521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 08:48:04.921187 containerd[1461]: time="2024-12-13T08:48:04.921135461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:48:04.921187 containerd[1461]: time="2024-12-13T08:48:04.921148043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:48:04.923228 containerd[1461]: time="2024-12-13T08:48:04.921986047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:48:04.923228 containerd[1461]: time="2024-12-13T08:48:04.922075671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 08:48:04.923228 containerd[1461]: time="2024-12-13T08:48:04.922140492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:48:04.923228 containerd[1461]: time="2024-12-13T08:48:04.922162843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 08:48:04.923228 containerd[1461]: time="2024-12-13T08:48:04.922346638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:48:04.923228 containerd[1461]: time="2024-12-13T08:48:04.922851385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:48:04.924116 containerd[1461]: time="2024-12-13T08:48:04.924081807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:48:04.924212 containerd[1461]: time="2024-12-13T08:48:04.924199835Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 08:48:04.924557 containerd[1461]: time="2024-12-13T08:48:04.924529793Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 08:48:04.924758 containerd[1461]: time="2024-12-13T08:48:04.924730357Z" level=info msg="metadata content store policy set" policy=shared Dec 13 08:48:04.930836 containerd[1461]: time="2024-12-13T08:48:04.930770765Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 08:48:04.931128 containerd[1461]: time="2024-12-13T08:48:04.931111617Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 08:48:04.931241 containerd[1461]: time="2024-12-13T08:48:04.931217543Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 08:48:04.931298 containerd[1461]: time="2024-12-13T08:48:04.931287927Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 08:48:04.931352 containerd[1461]: time="2024-12-13T08:48:04.931341835Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 08:48:04.931648 containerd[1461]: time="2024-12-13T08:48:04.931623850Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 08:48:04.932151 containerd[1461]: time="2024-12-13T08:48:04.932118532Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 08:48:04.932547 containerd[1461]: time="2024-12-13T08:48:04.932523519Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 08:48:04.932732 containerd[1461]: time="2024-12-13T08:48:04.932708455Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 08:48:04.932921 containerd[1461]: time="2024-12-13T08:48:04.932817685Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 08:48:04.933016 containerd[1461]: time="2024-12-13T08:48:04.932987427Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 08:48:04.933236 containerd[1461]: time="2024-12-13T08:48:04.933217734Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 08:48:04.933303 containerd[1461]: time="2024-12-13T08:48:04.933288529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 08:48:04.933361 containerd[1461]: time="2024-12-13T08:48:04.933345809Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 08:48:04.933459 containerd[1461]: time="2024-12-13T08:48:04.933442380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 08:48:04.933514 containerd[1461]: time="2024-12-13T08:48:04.933505120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 08:48:04.933722 containerd[1461]: time="2024-12-13T08:48:04.933701859Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933782078Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933815404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933838882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933860194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933903069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933920537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933939601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933958482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934023 containerd[1461]: time="2024-12-13T08:48:04.933978440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.934495 containerd[1461]: time="2024-12-13T08:48:04.933998308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934610148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934637788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934661907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934683790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934735610Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934781976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934821571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.935144 containerd[1461]: time="2024-12-13T08:48:04.934842693Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 08:48:04.936545 containerd[1461]: time="2024-12-13T08:48:04.936500823Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 08:48:04.937961 containerd[1461]: time="2024-12-13T08:48:04.936725199Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 08:48:04.937961 containerd[1461]: time="2024-12-13T08:48:04.936745850Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 08:48:04.937961 containerd[1461]: time="2024-12-13T08:48:04.936759890Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 08:48:04.937961 containerd[1461]: time="2024-12-13T08:48:04.936771596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.937961 containerd[1461]: time="2024-12-13T08:48:04.936787151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 08:48:04.937961 containerd[1461]: time="2024-12-13T08:48:04.936798704Z" level=info msg="NRI interface is disabled by configuration." Dec 13 08:48:04.937961 containerd[1461]: time="2024-12-13T08:48:04.936809364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 08:48:04.938305 containerd[1461]: time="2024-12-13T08:48:04.937152812Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 08:48:04.938305 containerd[1461]: time="2024-12-13T08:48:04.937214273Z" level=info msg="Connect containerd service" Dec 13 08:48:04.938305 containerd[1461]: time="2024-12-13T08:48:04.937256571Z" level=info msg="using legacy CRI server" Dec 13 08:48:04.938305 containerd[1461]: time="2024-12-13T08:48:04.937265318Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 08:48:04.938305 containerd[1461]: time="2024-12-13T08:48:04.937420878Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 08:48:04.941342 containerd[1461]: time="2024-12-13T08:48:04.940774312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:48:04.941342 containerd[1461]: time="2024-12-13T08:48:04.941177373Z" level=info msg="Start subscribing containerd event" Dec 13 08:48:04.942298 containerd[1461]: time="2024-12-13T08:48:04.941542766Z" level=info msg="Start recovering state" Dec 13 08:48:04.942298 containerd[1461]: time="2024-12-13T08:48:04.942057769Z" level=info msg="Start event monitor" Dec 13 08:48:04.942298 containerd[1461]: time="2024-12-13T08:48:04.942091109Z" level=info msg="Start snapshots syncer" Dec 13 08:48:04.942298 containerd[1461]: time="2024-12-13T08:48:04.942103045Z" level=info msg="Start cni network conf syncer for default" Dec 13 08:48:04.942298 containerd[1461]: time="2024-12-13T08:48:04.942112374Z" level=info msg="Start streaming server" Dec 13 08:48:04.942909 containerd[1461]: time="2024-12-13T08:48:04.942869883Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 08:48:04.944986 containerd[1461]: time="2024-12-13T08:48:04.944937280Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 08:48:04.947115 containerd[1461]: time="2024-12-13T08:48:04.945378951Z" level=info msg="containerd successfully booted in 0.123538s" Dec 13 08:48:04.945587 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 08:48:05.078252 systemd-networkd[1365]: eth1: Gained IPv6LL Dec 13 08:48:05.192304 tar[1457]: linux-amd64/LICENSE Dec 13 08:48:05.192934 tar[1457]: linux-amd64/README.md Dec 13 08:48:05.217337 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 08:48:05.907679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:05.907866 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:48:05.912798 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 08:48:05.917478 systemd[1]: Startup finished in 1.211s (kernel) + 6.011s (initrd) + 6.544s (userspace) = 13.767s. Dec 13 08:48:06.236117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 08:48:06.244534 systemd[1]: Started sshd@0-64.23.129.27:22-147.75.109.163:48796.service - OpenSSH per-connection server daemon (147.75.109.163:48796). 
Dec 13 08:48:06.338363 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 48796 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:06.341422 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:06.356510 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 08:48:06.365649 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 08:48:06.372755 systemd-logind[1446]: New session 1 of user core. Dec 13 08:48:06.399957 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 08:48:06.412800 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 08:48:06.432624 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 08:48:06.633587 systemd[1575]: Queued start job for default target default.target. Dec 13 08:48:06.640550 systemd[1575]: Created slice app.slice - User Application Slice. Dec 13 08:48:06.640626 systemd[1575]: Reached target paths.target - Paths. Dec 13 08:48:06.640652 systemd[1575]: Reached target timers.target - Timers. Dec 13 08:48:06.651456 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 08:48:06.667706 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 08:48:06.668862 systemd[1575]: Reached target sockets.target - Sockets. Dec 13 08:48:06.668901 systemd[1575]: Reached target basic.target - Basic System. Dec 13 08:48:06.668976 systemd[1575]: Reached target default.target - Main User Target. Dec 13 08:48:06.669033 systemd[1575]: Startup finished in 224ms. Dec 13 08:48:06.669909 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 08:48:06.677757 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 08:48:06.760636 systemd[1]: Started sshd@1-64.23.129.27:22-147.75.109.163:48806.service - OpenSSH per-connection server daemon (147.75.109.163:48806). Dec 13 08:48:06.779739 kubelet[1561]: E1213 08:48:06.779666 1561 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:48:06.785219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:48:06.785468 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:48:06.786310 systemd[1]: kubelet.service: Consumed 1.341s CPU time. Dec 13 08:48:06.833516 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 48806 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:06.836275 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:06.844668 systemd-logind[1446]: New session 2 of user core. Dec 13 08:48:06.853450 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 08:48:06.922300 sshd[1588]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:06.938966 systemd[1]: sshd@1-64.23.129.27:22-147.75.109.163:48806.service: Deactivated successfully. Dec 13 08:48:06.946552 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 08:48:06.949475 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. 
Dec 13 08:48:06.955733 systemd[1]: Started sshd@2-64.23.129.27:22-147.75.109.163:48818.service - OpenSSH per-connection server daemon (147.75.109.163:48818). Dec 13 08:48:06.958783 systemd-logind[1446]: Removed session 2. Dec 13 08:48:07.019141 sshd[1596]: Accepted publickey for core from 147.75.109.163 port 48818 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:07.021523 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:07.028256 systemd-logind[1446]: New session 3 of user core. Dec 13 08:48:07.034316 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 08:48:07.092043 sshd[1596]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:07.104930 systemd[1]: sshd@2-64.23.129.27:22-147.75.109.163:48818.service: Deactivated successfully. Dec 13 08:48:07.107224 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 08:48:07.108103 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Dec 13 08:48:07.116549 systemd[1]: Started sshd@3-64.23.129.27:22-147.75.109.163:48822.service - OpenSSH per-connection server daemon (147.75.109.163:48822). Dec 13 08:48:07.117776 systemd-logind[1446]: Removed session 3. Dec 13 08:48:07.167684 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 48822 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:07.169798 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:07.176858 systemd-logind[1446]: New session 4 of user core. Dec 13 08:48:07.182376 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 08:48:07.251827 sshd[1603]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:07.262561 systemd[1]: sshd@3-64.23.129.27:22-147.75.109.163:48822.service: Deactivated successfully. Dec 13 08:48:07.264877 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 08:48:07.265690 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Dec 13 08:48:07.275682 systemd[1]: Started sshd@4-64.23.129.27:22-147.75.109.163:48830.service - OpenSSH per-connection server daemon (147.75.109.163:48830). Dec 13 08:48:07.277738 systemd-logind[1446]: Removed session 4. Dec 13 08:48:07.325195 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 48830 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:07.327744 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:07.335733 systemd-logind[1446]: New session 5 of user core. Dec 13 08:48:07.353379 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 08:48:07.426265 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 08:48:07.426686 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:48:07.926645 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 08:48:07.940856 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 08:48:08.481439 dockerd[1628]: time="2024-12-13T08:48:08.481100370Z" level=info msg="Starting up" Dec 13 08:48:08.592747 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2917330365-merged.mount: Deactivated successfully. 
Dec 13 08:48:08.671773 dockerd[1628]: time="2024-12-13T08:48:08.670624276Z" level=info msg="Loading containers: start." Dec 13 08:48:08.825049 kernel: Initializing XFRM netlink socket Dec 13 08:48:08.933113 systemd-networkd[1365]: docker0: Link UP Dec 13 08:48:08.959244 dockerd[1628]: time="2024-12-13T08:48:08.959154664Z" level=info msg="Loading containers: done." Dec 13 08:48:08.981857 dockerd[1628]: time="2024-12-13T08:48:08.981232304Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 08:48:08.981857 dockerd[1628]: time="2024-12-13T08:48:08.981418932Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 08:48:08.981857 dockerd[1628]: time="2024-12-13T08:48:08.981580760Z" level=info msg="Daemon has completed initialization" Dec 13 08:48:09.035725 dockerd[1628]: time="2024-12-13T08:48:09.035640443Z" level=info msg="API listen on /run/docker.sock" Dec 13 08:48:09.035783 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 08:48:09.589440 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2216230734-merged.mount: Deactivated successfully. Dec 13 08:48:10.088321 containerd[1461]: time="2024-12-13T08:48:10.087902217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 08:48:10.693409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633118817.mount: Deactivated successfully. Dec 13 08:48:12.426864 containerd[1461]: time="2024-12-13T08:48:12.426780955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:12.428519 containerd[1461]: time="2024-12-13T08:48:12.428455011Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 08:48:12.429940 containerd[1461]: time="2024-12-13T08:48:12.428651238Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:12.432911 containerd[1461]: time="2024-12-13T08:48:12.432111003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:12.434199 containerd[1461]: time="2024-12-13T08:48:12.433840753Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.345883455s" Dec 13 08:48:12.434199 containerd[1461]: time="2024-12-13T08:48:12.433919629Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 08:48:12.480331 containerd[1461]: time="2024-12-13T08:48:12.479833391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 08:48:14.425732 containerd[1461]: time="2024-12-13T08:48:14.424673306Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:14.425732 containerd[1461]: time="2024-12-13T08:48:14.425608070Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 08:48:14.426692 containerd[1461]: time="2024-12-13T08:48:14.426634663Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:14.432017 containerd[1461]: time="2024-12-13T08:48:14.431926535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:14.434512 containerd[1461]: time="2024-12-13T08:48:14.434295775Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.954401104s" Dec 13 08:48:14.434985 containerd[1461]: time="2024-12-13T08:48:14.434751068Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 08:48:14.485024 containerd[1461]: time="2024-12-13T08:48:14.484757957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 08:48:15.859279 containerd[1461]: time="2024-12-13T08:48:15.859184364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:15.861104 containerd[1461]: time="2024-12-13T08:48:15.860957654Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 08:48:15.861648 containerd[1461]: time="2024-12-13T08:48:15.861550246Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:15.866490 containerd[1461]: time="2024-12-13T08:48:15.866414420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:15.868791 containerd[1461]: time="2024-12-13T08:48:15.868523149Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.383683599s" Dec 13 08:48:15.868791 containerd[1461]: time="2024-12-13T08:48:15.868603816Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 08:48:15.909302 containerd[1461]: time="2024-12-13T08:48:15.908842258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 
13 08:48:16.273124 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 08:48:17.036065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 08:48:17.045572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:17.051461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523062015.mount: Deactivated successfully. Dec 13 08:48:17.227803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:17.236128 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:48:17.327947 kubelet[1872]: E1213 08:48:17.327693 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:48:17.335811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:48:17.336088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:48:17.692863 containerd[1461]: time="2024-12-13T08:48:17.692657599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:17.695086 containerd[1461]: time="2024-12-13T08:48:17.694696287Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 08:48:17.696219 containerd[1461]: time="2024-12-13T08:48:17.696154793Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:17.699874 containerd[1461]: time="2024-12-13T08:48:17.699808626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:17.701431 containerd[1461]: time="2024-12-13T08:48:17.701320092Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.792389836s" Dec 13 08:48:17.702256 containerd[1461]: time="2024-12-13T08:48:17.701611020Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 08:48:17.759108 containerd[1461]: time="2024-12-13T08:48:17.758951802Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 08:48:18.269448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187350092.mount: Deactivated successfully. 
Dec 13 08:48:19.296962 containerd[1461]: time="2024-12-13T08:48:19.296872244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:19.297971 containerd[1461]: time="2024-12-13T08:48:19.297880034Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 08:48:19.299484 containerd[1461]: time="2024-12-13T08:48:19.299377994Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:19.304758 containerd[1461]: time="2024-12-13T08:48:19.304655770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:19.307700 containerd[1461]: time="2024-12-13T08:48:19.306947277Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.54790142s" Dec 13 08:48:19.307700 containerd[1461]: time="2024-12-13T08:48:19.307048328Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 08:48:19.350251 systemd-resolved[1331]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 13 08:48:19.352559 containerd[1461]: time="2024-12-13T08:48:19.352499697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 08:48:19.827214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157175965.mount: Deactivated successfully. 
Dec 13 08:48:19.831933 containerd[1461]: time="2024-12-13T08:48:19.831855136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:19.833340 containerd[1461]: time="2024-12-13T08:48:19.833170956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 08:48:19.833340 containerd[1461]: time="2024-12-13T08:48:19.833274205Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:19.836841 containerd[1461]: time="2024-12-13T08:48:19.836759461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:19.838662 containerd[1461]: time="2024-12-13T08:48:19.837858645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 485.308572ms" Dec 13 08:48:19.838662 containerd[1461]: time="2024-12-13T08:48:19.837909674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 08:48:19.869667 containerd[1461]: time="2024-12-13T08:48:19.869601163Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 08:48:20.428242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027878571.mount: Deactivated successfully. Dec 13 08:48:22.391043 containerd[1461]: time="2024-12-13T08:48:22.389240443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:22.391043 containerd[1461]: time="2024-12-13T08:48:22.390644241Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 08:48:22.392053 containerd[1461]: time="2024-12-13T08:48:22.391945124Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:22.396578 containerd[1461]: time="2024-12-13T08:48:22.396495608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:22.398774 containerd[1461]: time="2024-12-13T08:48:22.398671598Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.528744627s" Dec 13 08:48:22.399083 containerd[1461]: time="2024-12-13T08:48:22.399042831Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 08:48:25.402840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 08:48:25.411469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:25.448043 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-5.scope)... Dec 13 08:48:25.448070 systemd[1]: Reloading... Dec 13 08:48:25.609086 zram_generator::config[2095]: No configuration found. Dec 13 08:48:25.787193 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:48:25.884629 systemd[1]: Reloading finished in 435 ms. Dec 13 08:48:25.945344 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 08:48:25.945485 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 08:48:25.946151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:25.952702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:26.119408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:26.122704 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:48:26.198681 kubelet[2143]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:26.199396 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 08:48:26.199534 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:26.201345 kubelet[2143]: I1213 08:48:26.201226 2143 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:48:26.748418 kubelet[2143]: I1213 08:48:26.748348 2143 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 08:48:26.749312 kubelet[2143]: I1213 08:48:26.748751 2143 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:48:26.749312 kubelet[2143]: I1213 08:48:26.749139 2143 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 08:48:26.777687 kubelet[2143]: E1213 08:48:26.776589 2143 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.129.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.777687 kubelet[2143]: I1213 08:48:26.776849 2143 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:48:26.800122 kubelet[2143]: I1213 08:48:26.800083 2143 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 08:48:26.800788 kubelet[2143]: I1213 08:48:26.800721 2143 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:48:26.801352 kubelet[2143]: I1213 08:48:26.800943 2143 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-7-437820f1b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:48:26.802567 kubelet[2143]: I1213 08:48:26.802529 2143 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:48:26.802757 kubelet[2143]: I1213 08:48:26.802735 2143 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:48:26.804152 kubelet[2143]: I1213 08:48:26.804115 2143 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:26.805912 kubelet[2143]: I1213 08:48:26.805468 2143 kubelet.go:400] "Attempting to sync node with API server" Dec 13 08:48:26.805912 kubelet[2143]: I1213 08:48:26.805516 2143 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:48:26.805912 kubelet[2143]: I1213 08:48:26.805565 2143 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:48:26.805912 kubelet[2143]: I1213 08:48:26.805609 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:48:26.811042 kubelet[2143]: W1213 08:48:26.810478 2143 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.129.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-437820f1b8&limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.811042 kubelet[2143]: E1213 08:48:26.810621 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.129.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-437820f1b8&limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.811042 kubelet[2143]: W1213 08:48:26.810718 2143 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.129.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.811042 kubelet[2143]: E1213 08:48:26.810762 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.129.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.811042 kubelet[2143]: I1213 08:48:26.810898 2143 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:48:26.814136 kubelet[2143]: I1213 08:48:26.814018 2143 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:48:26.814368 kubelet[2143]: W1213 08:48:26.814184 2143 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 08:48:26.816390 kubelet[2143]: I1213 08:48:26.816203 2143 server.go:1264] "Started kubelet" Dec 13 08:48:26.821775 kubelet[2143]: I1213 08:48:26.821683 2143 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:48:26.823183 kubelet[2143]: I1213 08:48:26.822829 2143 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:48:26.823582 kubelet[2143]: I1213 08:48:26.823550 2143 server.go:455] "Adding debug handlers to kubelet server" Dec 13 08:48:26.823746 kubelet[2143]: I1213 08:48:26.823726 2143 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:48:26.824200 kubelet[2143]: E1213 08:48:26.824071 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.129.27:6443/api/v1/namespaces/default/events\": dial tcp 64.23.129.27:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-7-437820f1b8.1810b04e48fdc766 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-7-437820f1b8,UID:ci-4081.2.1-7-437820f1b8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-7-437820f1b8,},FirstTimestamp:2024-12-13 08:48:26.81616983 +0000 UTC m=+0.686584015,LastTimestamp:2024-12-13 08:48:26.81616983 +0000 UTC m=+0.686584015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-7-437820f1b8,}" Dec 13 08:48:26.826559 kubelet[2143]: I1213 08:48:26.826527 2143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:48:26.829426 kubelet[2143]: I1213 08:48:26.829398 2143 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:48:26.829771 kubelet[2143]: I1213 08:48:26.829752 2143 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 08:48:26.829985 kubelet[2143]: I1213 08:48:26.829971 2143 reconciler.go:26] "Reconciler: start to sync state" Dec 13 08:48:26.830714 kubelet[2143]: W1213 08:48:26.830661 2143 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.129.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.830834 
kubelet[2143]: E1213 08:48:26.830823 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.129.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.831247 kubelet[2143]: E1213 08:48:26.831213 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.129.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-437820f1b8?timeout=10s\": dial tcp 64.23.129.27:6443: connect: connection refused" interval="200ms" Dec 13 08:48:26.835429 kubelet[2143]: I1213 08:48:26.835389 2143 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:48:26.835615 kubelet[2143]: I1213 08:48:26.835582 2143 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:48:26.839787 kubelet[2143]: E1213 08:48:26.839742 2143 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:48:26.841234 kubelet[2143]: I1213 08:48:26.839985 2143 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:48:26.869782 kubelet[2143]: I1213 08:48:26.869698 2143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:48:26.878122 kubelet[2143]: I1213 08:48:26.878046 2143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 08:48:26.878122 kubelet[2143]: I1213 08:48:26.878120 2143 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:48:26.878481 kubelet[2143]: I1213 08:48:26.878158 2143 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 08:48:26.878481 kubelet[2143]: E1213 08:48:26.878252 2143 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:48:26.886060 kubelet[2143]: W1213 08:48:26.885902 2143 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.129.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.886060 kubelet[2143]: E1213 08:48:26.886030 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.129.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:26.890659 kubelet[2143]: I1213 08:48:26.890619 2143 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:48:26.892453 kubelet[2143]: I1213 08:48:26.891779 2143 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:48:26.892453 kubelet[2143]: I1213 08:48:26.891812 2143 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:26.894811 kubelet[2143]: I1213 08:48:26.894745 2143 policy_none.go:49] "None policy: Start" Dec 13 08:48:26.896534 kubelet[2143]: I1213 08:48:26.896081 2143 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:48:26.896534 kubelet[2143]: I1213 08:48:26.896124 2143 state_mem.go:35] "Initializing new in-memory state store" Dec 13 
08:48:26.904286 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 08:48:26.918596 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 08:48:26.931449 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 08:48:26.932532 kubelet[2143]: I1213 08:48:26.931514 2143 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:26.932532 kubelet[2143]: E1213 08:48:26.931951 2143 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.129.27:6443/api/v1/nodes\": dial tcp 64.23.129.27:6443: connect: connection refused" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:26.933308 kubelet[2143]: I1213 08:48:26.933282 2143 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:48:26.933937 kubelet[2143]: I1213 08:48:26.933516 2143 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 08:48:26.933937 kubelet[2143]: I1213 08:48:26.933666 2143 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:48:26.936457 kubelet[2143]: E1213 08:48:26.936432 2143 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-7-437820f1b8\" not found" Dec 13 08:48:26.978699 kubelet[2143]: I1213 08:48:26.978577 2143 topology_manager.go:215] "Topology Admit Handler" podUID="a4a97ee1c3ddfa3e7eb4ff9c066fae50" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:26.980049 kubelet[2143]: I1213 08:48:26.979888 2143 topology_manager.go:215] "Topology Admit Handler" podUID="87d25eb12d0b53059fe5d566bd7df922" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:26.981709 kubelet[2143]: I1213 08:48:26.981597 2143 topology_manager.go:215] "Topology Admit Handler" podUID="b6a50a32f98e94b7f9b414ca3b1d804c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:26.991071 systemd[1]: Created slice kubepods-burstable-poda4a97ee1c3ddfa3e7eb4ff9c066fae50.slice - libcontainer container kubepods-burstable-poda4a97ee1c3ddfa3e7eb4ff9c066fae50.slice. Dec 13 08:48:27.013884 systemd[1]: Created slice kubepods-burstable-pod87d25eb12d0b53059fe5d566bd7df922.slice - libcontainer container kubepods-burstable-pod87d25eb12d0b53059fe5d566bd7df922.slice. Dec 13 08:48:27.033414 kubelet[2143]: E1213 08:48:27.032219 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.129.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-437820f1b8?timeout=10s\": dial tcp 64.23.129.27:6443: connect: connection refused" interval="400ms" Dec 13 08:48:27.032824 systemd[1]: Created slice kubepods-burstable-podb6a50a32f98e94b7f9b414ca3b1d804c.slice - libcontainer container kubepods-burstable-podb6a50a32f98e94b7f9b414ca3b1d804c.slice. 
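The entries just above show systemd creating the top-level kubepods QoS slices and then one burstable slice per static control-plane pod, named after the pod UID. As an illustrative sketch only (not kubelet code), the naming pattern visible in those slice names can be reproduced as below; the dash-to-underscore escaping is inferred from the besteffort kube-proxy slice that appears later in this log, and the guaranteed-QoS case is an assumption since no such slice occurs here.

package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod reproduces the cgroup slice naming pattern seen in the log:
// "kubepods-<qos>-pod<uid with '-' escaped to '_'>.slice".
// The guaranteed case (no QoS segment) is an assumption for this sketch.
func sliceNameForPod(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" || qosClass == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UIDs taken from pods admitted in this log.
	fmt.Println(sliceNameForPod("burstable", "a4a97ee1c3ddfa3e7eb4ff9c066fae50"))
	fmt.Println(sliceNameForPod("besteffort", "fb2c64a8-f4de-41b7-87b4-565eb9dcf143"))
}

Running it prints kubepods-burstable-poda4a97ee1c3ddfa3e7eb4ff9c066fae50.slice and kubepods-besteffort-podfb2c64a8_f4de_41b7_87b4_565eb9dcf143.slice, matching the slice names systemd reports.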
Dec 13 08:48:27.131668 kubelet[2143]: I1213 08:48:27.131588 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6a50a32f98e94b7f9b414ca3b1d804c-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" (UID: \"b6a50a32f98e94b7f9b414ca3b1d804c\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.131668 kubelet[2143]: I1213 08:48:27.131664 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.131668 kubelet[2143]: I1213 08:48:27.131697 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6a50a32f98e94b7f9b414ca3b1d804c-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" (UID: \"b6a50a32f98e94b7f9b414ca3b1d804c\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.131986 kubelet[2143]: I1213 08:48:27.131733 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.131986 kubelet[2143]: I1213 08:48:27.131769 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.131986 kubelet[2143]: I1213 08:48:27.131805 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87d25eb12d0b53059fe5d566bd7df922-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-7-437820f1b8\" (UID: \"87d25eb12d0b53059fe5d566bd7df922\") " pod="kube-system/kube-scheduler-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.131986 kubelet[2143]: I1213 08:48:27.131839 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6a50a32f98e94b7f9b414ca3b1d804c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" (UID: \"b6a50a32f98e94b7f9b414ca3b1d804c\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.131986 kubelet[2143]: I1213 08:48:27.131867 2143 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.132246 kubelet[2143]: I1213 08:48:27.131899 2143 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.133695 kubelet[2143]: I1213 08:48:27.133649 2143 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.134468 kubelet[2143]: E1213 08:48:27.134219 2143 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.129.27:6443/api/v1/nodes\": dial tcp 64.23.129.27:6443: connect: connection refused" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.324640 kubelet[2143]: E1213 08:48:27.323230 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:27.324640 kubelet[2143]: E1213 08:48:27.323555 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:27.331109 containerd[1461]: time="2024-12-13T08:48:27.329692896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-7-437820f1b8,Uid:a4a97ee1c3ddfa3e7eb4ff9c066fae50,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:27.339083 kubelet[2143]: E1213 08:48:27.337669 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:27.342078 containerd[1461]: time="2024-12-13T08:48:27.342022376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-7-437820f1b8,Uid:b6a50a32f98e94b7f9b414ca3b1d804c,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:27.344698 systemd-resolved[1331]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Dec 13 08:48:27.346197 containerd[1461]: time="2024-12-13T08:48:27.344882668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-7-437820f1b8,Uid:87d25eb12d0b53059fe5d566bd7df922,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:27.435256 kubelet[2143]: E1213 08:48:27.435151 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.129.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-437820f1b8?timeout=10s\": dial tcp 64.23.129.27:6443: connect: connection refused" interval="800ms" Dec 13 08:48:27.537033 kubelet[2143]: I1213 08:48:27.536970 2143 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.537924 kubelet[2143]: E1213 08:48:27.537860 2143 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.129.27:6443/api/v1/nodes\": dial tcp 64.23.129.27:6443: connect: connection refused" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:27.653951 kubelet[2143]: W1213 08:48:27.653672 2143 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.129.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:27.653951 kubelet[2143]: E1213 08:48:27.653812 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.23.129.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:27.842461 kubelet[2143]: W1213 08:48:27.842203 2143 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.129.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:27.842461 kubelet[2143]: E1213 08:48:27.842448 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.23.129.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:27.883148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90913006.mount: Deactivated successfully. 
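Every reflector warning in this stretch is the same underlying failure: the kubelet's informers attempt an initial LIST against https://64.23.129.27:6443 before the kube-apiserver static pod is running, and the TCP connection is refused. A hedged client-go sketch of that first LIST follows; the kubeconfig path is an assumption (the kubelet itself authenticates with its own rotated credentials), while the limit of 500 mirrors the query string in the failed requests.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is an assumption for this sketch, not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same shape as the reflector's first request: /api/v1/services?limit=500
	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(
		context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		log.Fatalf("list services: %v", err)
	}
	fmt.Printf("listed %d services\n", len(svcs.Items))
}

Run before the kube-apiserver container (started further down) is up, this fails with the same "connect: connection refused" dial error the reflectors report; afterwards it returns the service list.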
Dec 13 08:48:27.890665 containerd[1461]: time="2024-12-13T08:48:27.890608656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:27.892500 containerd[1461]: time="2024-12-13T08:48:27.892381623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 08:48:27.893676 containerd[1461]: time="2024-12-13T08:48:27.893623503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:27.895288 containerd[1461]: time="2024-12-13T08:48:27.895108447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:48:27.895413 containerd[1461]: time="2024-12-13T08:48:27.895288090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:27.896744 containerd[1461]: time="2024-12-13T08:48:27.896403196Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:27.896744 containerd[1461]: time="2024-12-13T08:48:27.896661468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:48:27.900356 containerd[1461]: time="2024-12-13T08:48:27.900297743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:27.903561 containerd[1461]: time="2024-12-13T08:48:27.902889195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.883049ms" Dec 13 08:48:27.904837 containerd[1461]: time="2024-12-13T08:48:27.904663109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.041326ms" Dec 13 08:48:27.908316 containerd[1461]: time="2024-12-13T08:48:27.908265525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.32409ms" Dec 13 08:48:27.992294 kubelet[2143]: W1213 08:48:27.992222 2143 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.129.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:27.992294 
kubelet[2143]: E1213 08:48:27.992292 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.23.129.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:28.120414 containerd[1461]: time="2024-12-13T08:48:28.120175112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:28.120806 containerd[1461]: time="2024-12-13T08:48:28.120381579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:28.120806 containerd[1461]: time="2024-12-13T08:48:28.120758428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:28.121655 containerd[1461]: time="2024-12-13T08:48:28.121518434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:28.123823 containerd[1461]: time="2024-12-13T08:48:28.122032149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:28.123823 containerd[1461]: time="2024-12-13T08:48:28.122117325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:28.123823 containerd[1461]: time="2024-12-13T08:48:28.122140666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:28.123823 containerd[1461]: time="2024-12-13T08:48:28.122279741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:28.128818 containerd[1461]: time="2024-12-13T08:48:28.128670341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:28.128818 containerd[1461]: time="2024-12-13T08:48:28.128766348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:28.129133 containerd[1461]: time="2024-12-13T08:48:28.128779808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:28.129133 containerd[1461]: time="2024-12-13T08:48:28.128921359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:28.168062 systemd[1]: Started cri-containerd-9ea7ed7efb4cec929af62659c52d99c2dcb59aab5badca49e291467f125788ae.scope - libcontainer container 9ea7ed7efb4cec929af62659c52d99c2dcb59aab5badca49e291467f125788ae. Dec 13 08:48:28.183378 systemd[1]: Started cri-containerd-8c9a7ea0ff39d08618ab788451bc225d6bca1db291827553b8289cfbe1e2fc60.scope - libcontainer container 8c9a7ea0ff39d08618ab788451bc225d6bca1db291827553b8289cfbe1e2fc60. Dec 13 08:48:28.198740 systemd[1]: Started cri-containerd-f1d7189a5a28581345ec0cf99b16c2274e20644930b5e6bf4af9ac33019eb7aa.scope - libcontainer container f1d7189a5a28581345ec0cf99b16c2274e20644930b5e6bf4af9ac33019eb7aa. 
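The lease controller's "will retry" interval climbs from 200ms to 400ms to 800ms across the entries above, and doubles again to 1.6s a few entries below: a plain doubling backoff while the apiserver is unreachable. A minimal sketch of that doubling pattern follows; the 7s cap is an illustrative assumption, not a value taken from this log.

package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry interval up to a limit, matching the
// progression visible in the lease-controller entries (200ms, 400ms,
// 800ms, 1.6s, ...). The limit is an illustrative assumption.
func nextInterval(cur, limit time.Duration) time.Duration {
	next := cur * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	interval := 200 * time.Millisecond
	for i := 0; i < 6; i++ {
		fmt.Println(interval)
		interval = nextInterval(interval, 7*time.Second)
	}
}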
Dec 13 08:48:28.239039 kubelet[2143]: E1213 08:48:28.238767 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.129.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-7-437820f1b8?timeout=10s\": dial tcp 64.23.129.27:6443: connect: connection refused" interval="1.6s" Dec 13 08:48:28.277531 containerd[1461]: time="2024-12-13T08:48:28.277429503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-7-437820f1b8,Uid:a4a97ee1c3ddfa3e7eb4ff9c066fae50,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ea7ed7efb4cec929af62659c52d99c2dcb59aab5badca49e291467f125788ae\"" Dec 13 08:48:28.286298 kubelet[2143]: E1213 08:48:28.286080 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:28.306059 containerd[1461]: time="2024-12-13T08:48:28.304970011Z" level=info msg="CreateContainer within sandbox \"9ea7ed7efb4cec929af62659c52d99c2dcb59aab5badca49e291467f125788ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 08:48:28.317543 containerd[1461]: time="2024-12-13T08:48:28.317487349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-7-437820f1b8,Uid:87d25eb12d0b53059fe5d566bd7df922,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c9a7ea0ff39d08618ab788451bc225d6bca1db291827553b8289cfbe1e2fc60\"" Dec 13 08:48:28.321414 kubelet[2143]: E1213 08:48:28.321337 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:28.344623 containerd[1461]: time="2024-12-13T08:48:28.329923418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-7-437820f1b8,Uid:b6a50a32f98e94b7f9b414ca3b1d804c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1d7189a5a28581345ec0cf99b16c2274e20644930b5e6bf4af9ac33019eb7aa\"" Dec 13 08:48:28.344623 containerd[1461]: time="2024-12-13T08:48:28.332214001Z" level=info msg="CreateContainer within sandbox \"8c9a7ea0ff39d08618ab788451bc225d6bca1db291827553b8289cfbe1e2fc60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 08:48:28.344623 containerd[1461]: time="2024-12-13T08:48:28.338720121Z" level=info msg="CreateContainer within sandbox \"f1d7189a5a28581345ec0cf99b16c2274e20644930b5e6bf4af9ac33019eb7aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 08:48:28.345292 kubelet[2143]: E1213 08:48:28.333059 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:28.345292 kubelet[2143]: I1213 08:48:28.340850 2143 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:28.345292 kubelet[2143]: E1213 08:48:28.341242 2143 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.23.129.27:6443/api/v1/nodes\": dial tcp 64.23.129.27:6443: connect: connection refused" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:28.346991 kubelet[2143]: W1213 08:48:28.346665 2143 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://64.23.129.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-437820f1b8&limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:28.346991 kubelet[2143]: E1213 08:48:28.346769 2143 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.23.129.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-7-437820f1b8&limit=500&resourceVersion=0": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:28.354506 containerd[1461]: time="2024-12-13T08:48:28.353508928Z" level=info msg="CreateContainer within sandbox \"9ea7ed7efb4cec929af62659c52d99c2dcb59aab5badca49e291467f125788ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b5c3332bb81ada8ed770f7e386c1cbe87871944b8b49db7ea499d51c4f0e6cc\"" Dec 13 08:48:28.355478 containerd[1461]: time="2024-12-13T08:48:28.355439331Z" level=info msg="StartContainer for \"8b5c3332bb81ada8ed770f7e386c1cbe87871944b8b49db7ea499d51c4f0e6cc\"" Dec 13 08:48:28.364333 containerd[1461]: time="2024-12-13T08:48:28.364079974Z" level=info msg="CreateContainer within sandbox \"8c9a7ea0ff39d08618ab788451bc225d6bca1db291827553b8289cfbe1e2fc60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9a84d3440006cda37a3af1a610564ce31467fcf7b6c4b64d51bede9e5baa943b\"" Dec 13 08:48:28.365062 containerd[1461]: time="2024-12-13T08:48:28.364813666Z" level=info msg="StartContainer for \"9a84d3440006cda37a3af1a610564ce31467fcf7b6c4b64d51bede9e5baa943b\"" Dec 13 08:48:28.372530 containerd[1461]: time="2024-12-13T08:48:28.372458985Z" level=info msg="CreateContainer within sandbox \"f1d7189a5a28581345ec0cf99b16c2274e20644930b5e6bf4af9ac33019eb7aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76ed4ab34f8c46392a0d86644003904104c0ac300b4d23e05f026d60b9f7d111\"" Dec 13 08:48:28.374078 containerd[1461]: time="2024-12-13T08:48:28.373733555Z" level=info msg="StartContainer for \"76ed4ab34f8c46392a0d86644003904104c0ac300b4d23e05f026d60b9f7d111\"" Dec 13 08:48:28.413408 systemd[1]: Started cri-containerd-8b5c3332bb81ada8ed770f7e386c1cbe87871944b8b49db7ea499d51c4f0e6cc.scope - libcontainer container 8b5c3332bb81ada8ed770f7e386c1cbe87871944b8b49db7ea499d51c4f0e6cc. Dec 13 08:48:28.450038 systemd[1]: Started cri-containerd-9a84d3440006cda37a3af1a610564ce31467fcf7b6c4b64d51bede9e5baa943b.scope - libcontainer container 9a84d3440006cda37a3af1a610564ce31467fcf7b6c4b64d51bede9e5baa943b. Dec 13 08:48:28.464579 systemd[1]: Started cri-containerd-76ed4ab34f8c46392a0d86644003904104c0ac300b4d23e05f026d60b9f7d111.scope - libcontainer container 76ed4ab34f8c46392a0d86644003904104c0ac300b4d23e05f026d60b9f7d111. 
Dec 13 08:48:28.569778 containerd[1461]: time="2024-12-13T08:48:28.569706412Z" level=info msg="StartContainer for \"8b5c3332bb81ada8ed770f7e386c1cbe87871944b8b49db7ea499d51c4f0e6cc\" returns successfully" Dec 13 08:48:28.576843 containerd[1461]: time="2024-12-13T08:48:28.576762959Z" level=info msg="StartContainer for \"76ed4ab34f8c46392a0d86644003904104c0ac300b4d23e05f026d60b9f7d111\" returns successfully" Dec 13 08:48:28.594178 containerd[1461]: time="2024-12-13T08:48:28.594090746Z" level=info msg="StartContainer for \"9a84d3440006cda37a3af1a610564ce31467fcf7b6c4b64d51bede9e5baa943b\" returns successfully" Dec 13 08:48:28.868490 kubelet[2143]: E1213 08:48:28.868335 2143 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.23.129.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.23.129.27:6443: connect: connection refused Dec 13 08:48:28.902549 kubelet[2143]: E1213 08:48:28.902459 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:28.905758 kubelet[2143]: E1213 08:48:28.905712 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:28.915529 kubelet[2143]: E1213 08:48:28.915143 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:29.914858 kubelet[2143]: E1213 08:48:29.914807 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:29.943314 kubelet[2143]: I1213 08:48:29.943267 2143 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:31.365913 kubelet[2143]: E1213 08:48:31.365840 2143 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-7-437820f1b8\" not found" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:31.445479 kubelet[2143]: I1213 08:48:31.445420 2143 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:31.808543 kubelet[2143]: I1213 08:48:31.808206 2143 apiserver.go:52] "Watching apiserver" Dec 13 08:48:31.831165 kubelet[2143]: I1213 08:48:31.831075 2143 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 08:48:33.284426 kubelet[2143]: W1213 08:48:33.284321 2143 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:33.286860 kubelet[2143]: E1213 08:48:33.286813 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:33.583763 systemd[1]: Reloading requested from client PID 2417 ('systemctl') (unit session-5.scope)... Dec 13 08:48:33.583787 systemd[1]: Reloading... Dec 13 08:48:33.744093 zram_generator::config[2456]: No configuration found. 
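The certificate_manager error at the start of this stretch shows the kubelet unable to POST a CertificateSigningRequest while the apiserver is still refusing connections; once the control plane is reachable this normally succeeds, and the restarted kubelet further down loads its client credential from /var/lib/kubelet/pki/kubelet-client-current.pem. A small standard-library sketch for inspecting that rotated certificate's validity window, with the path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path as logged by the kubelet's certificate store.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// Walk the PEM blocks and report every CERTIFICATE block's validity window.
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}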
Dec 13 08:48:33.924755 kubelet[2143]: E1213 08:48:33.924706 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:34.005214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:48:34.165825 systemd[1]: Reloading finished in 581 ms. Dec 13 08:48:34.234798 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:34.252616 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 08:48:34.252963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:34.253083 systemd[1]: kubelet.service: Consumed 1.253s CPU time, 110.4M memory peak, 0B memory swap peak. Dec 13 08:48:34.272574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:34.416333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:34.420119 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:48:34.514041 kubelet[2507]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:34.514041 kubelet[2507]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 08:48:34.514041 kubelet[2507]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:34.514609 kubelet[2507]: I1213 08:48:34.514543 2507 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:48:34.527951 kubelet[2507]: I1213 08:48:34.527874 2507 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 08:48:34.527951 kubelet[2507]: I1213 08:48:34.527908 2507 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:48:34.528273 kubelet[2507]: I1213 08:48:34.528169 2507 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 08:48:34.531317 kubelet[2507]: I1213 08:48:34.531208 2507 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 08:48:34.532962 kubelet[2507]: I1213 08:48:34.532910 2507 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:48:34.544514 kubelet[2507]: I1213 08:48:34.544468 2507 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 08:48:34.546036 kubelet[2507]: I1213 08:48:34.545919 2507 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:48:34.546432 kubelet[2507]: I1213 08:48:34.546040 2507 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-7-437820f1b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:48:34.546630 kubelet[2507]: I1213 08:48:34.546486 2507 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:48:34.546630 kubelet[2507]: I1213 08:48:34.546502 2507 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:48:34.546630 kubelet[2507]: I1213 08:48:34.546582 2507 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:34.546777 kubelet[2507]: I1213 08:48:34.546733 2507 kubelet.go:400] "Attempting to sync node with API server" Dec 13 08:48:34.546777 kubelet[2507]: I1213 08:48:34.546750 2507 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:48:34.546911 kubelet[2507]: I1213 08:48:34.546884 2507 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:48:34.546964 kubelet[2507]: I1213 08:48:34.546919 2507 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:48:34.553203 kubelet[2507]: I1213 08:48:34.553165 2507 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:48:34.553440 kubelet[2507]: I1213 08:48:34.553421 2507 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:48:34.554125 kubelet[2507]: I1213 08:48:34.554097 2507 server.go:1264] "Started kubelet" Dec 13 08:48:34.564245 kubelet[2507]: I1213 08:48:34.564197 2507 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:48:34.573627 kubelet[2507]: I1213 08:48:34.572457 2507 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:48:34.583207 kubelet[2507]: I1213 08:48:34.582129 2507 server.go:455] "Adding 
debug handlers to kubelet server" Dec 13 08:48:34.590161 kubelet[2507]: I1213 08:48:34.590080 2507 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:48:34.592155 kubelet[2507]: I1213 08:48:34.592060 2507 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:48:34.596598 kubelet[2507]: I1213 08:48:34.596277 2507 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:48:34.600594 kubelet[2507]: I1213 08:48:34.600561 2507 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:48:34.604476 kubelet[2507]: I1213 08:48:34.604352 2507 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 08:48:34.605118 kubelet[2507]: I1213 08:48:34.605094 2507 reconciler.go:26] "Reconciler: start to sync state" Dec 13 08:48:34.606250 kubelet[2507]: I1213 08:48:34.606110 2507 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 08:48:34.606711 kubelet[2507]: I1213 08:48:34.606693 2507 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:48:34.607867 kubelet[2507]: I1213 08:48:34.606824 2507 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 08:48:34.607867 kubelet[2507]: E1213 08:48:34.606893 2507 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:48:34.624801 kubelet[2507]: I1213 08:48:34.624760 2507 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:48:34.627452 kubelet[2507]: E1213 08:48:34.627411 2507 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:48:34.631543 kubelet[2507]: I1213 08:48:34.631086 2507 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:48:34.631543 kubelet[2507]: I1213 08:48:34.631119 2507 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:48:34.707611 kubelet[2507]: I1213 08:48:34.704472 2507 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:34.708727 kubelet[2507]: E1213 08:48:34.708170 2507 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 08:48:34.724994 kubelet[2507]: I1213 08:48:34.724939 2507 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:34.726991 kubelet[2507]: I1213 08:48:34.726640 2507 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-7-437820f1b8" Dec 13 08:48:34.732523 kubelet[2507]: I1213 08:48:34.732412 2507 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:48:34.732523 kubelet[2507]: I1213 08:48:34.732438 2507 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:48:34.732523 kubelet[2507]: I1213 08:48:34.732509 2507 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:34.733534 kubelet[2507]: I1213 08:48:34.733047 2507 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 08:48:34.733534 kubelet[2507]: I1213 08:48:34.733081 2507 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 08:48:34.733534 kubelet[2507]: I1213 08:48:34.733249 2507 policy_none.go:49] "None policy: Start" Dec 13 08:48:34.739424 kubelet[2507]: I1213 08:48:34.739379 2507 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:48:34.739424 kubelet[2507]: I1213 08:48:34.739439 2507 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:48:34.740745 kubelet[2507]: I1213 08:48:34.739750 2507 state_mem.go:75] "Updated machine memory state" Dec 13 08:48:34.770513 kubelet[2507]: I1213 08:48:34.769257 2507 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:48:34.770513 kubelet[2507]: I1213 08:48:34.769494 2507 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 08:48:34.770513 kubelet[2507]: I1213 08:48:34.769836 2507 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:48:34.910070 kubelet[2507]: I1213 08:48:34.909889 2507 topology_manager.go:215] "Topology Admit Handler" podUID="b6a50a32f98e94b7f9b414ca3b1d804c" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:34.911679 kubelet[2507]: I1213 08:48:34.910198 2507 topology_manager.go:215] "Topology Admit Handler" podUID="a4a97ee1c3ddfa3e7eb4ff9c066fae50" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:34.911679 kubelet[2507]: I1213 08:48:34.910929 2507 topology_manager.go:215] "Topology Admit Handler" podUID="87d25eb12d0b53059fe5d566bd7df922" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:34.920606 kubelet[2507]: W1213 08:48:34.920340 2507 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:34.923427 kubelet[2507]: 
W1213 08:48:34.922384 2507 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:34.923427 kubelet[2507]: E1213 08:48:34.922483 2507 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:34.929517 kubelet[2507]: W1213 08:48:34.929464 2507 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:35.011678 kubelet[2507]: I1213 08:48:35.011469 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6a50a32f98e94b7f9b414ca3b1d804c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" (UID: \"b6a50a32f98e94b7f9b414ca3b1d804c\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.012842 kubelet[2507]: I1213 08:48:35.011892 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87d25eb12d0b53059fe5d566bd7df922-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-7-437820f1b8\" (UID: \"87d25eb12d0b53059fe5d566bd7df922\") " pod="kube-system/kube-scheduler-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.015225 kubelet[2507]: I1213 08:48:35.014917 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.015225 kubelet[2507]: I1213 08:48:35.015070 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.015225 kubelet[2507]: I1213 08:48:35.015167 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6a50a32f98e94b7f9b414ca3b1d804c-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" (UID: \"b6a50a32f98e94b7f9b414ca3b1d804c\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.017090 kubelet[2507]: I1213 08:48:35.016763 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6a50a32f98e94b7f9b414ca3b1d804c-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" (UID: \"b6a50a32f98e94b7f9b414ca3b1d804c\") " pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.017090 kubelet[2507]: I1213 08:48:35.016901 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: 
\"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.017090 kubelet[2507]: I1213 08:48:35.016976 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.017090 kubelet[2507]: I1213 08:48:35.017035 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4a97ee1c3ddfa3e7eb4ff9c066fae50-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-7-437820f1b8\" (UID: \"a4a97ee1c3ddfa3e7eb4ff9c066fae50\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.221922 kubelet[2507]: E1213 08:48:35.221762 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:35.224710 kubelet[2507]: E1213 08:48:35.224239 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:35.230995 kubelet[2507]: E1213 08:48:35.230928 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:35.549621 kubelet[2507]: I1213 08:48:35.549395 2507 apiserver.go:52] "Watching apiserver" Dec 13 08:48:35.607053 kubelet[2507]: I1213 08:48:35.605775 2507 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 08:48:35.681029 kubelet[2507]: E1213 08:48:35.679882 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:35.686100 kubelet[2507]: E1213 08:48:35.682622 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:35.696080 kubelet[2507]: W1213 08:48:35.696032 2507 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:35.696257 kubelet[2507]: E1213 08:48:35.696146 2507 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-7-437820f1b8\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" Dec 13 08:48:35.697112 kubelet[2507]: E1213 08:48:35.696907 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:35.745352 kubelet[2507]: I1213 08:48:35.744651 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-7-437820f1b8" podStartSLOduration=2.744608988 podStartE2EDuration="2.744608988s" podCreationTimestamp="2024-12-13 08:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:35.732247876 +0000 UTC m=+1.303179025" watchObservedRunningTime="2024-12-13 08:48:35.744608988 +0000 UTC m=+1.315540116" Dec 13 08:48:35.773095 kubelet[2507]: I1213 08:48:35.772978 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-7-437820f1b8" podStartSLOduration=1.7729536320000001 podStartE2EDuration="1.772953632s" podCreationTimestamp="2024-12-13 08:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:35.745225996 +0000 UTC m=+1.316157147" watchObservedRunningTime="2024-12-13 08:48:35.772953632 +0000 UTC m=+1.343884760" Dec 13 08:48:36.012696 sudo[1613]: pam_unix(sudo:session): session closed for user root Dec 13 08:48:36.020498 sshd[1610]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:36.027776 systemd[1]: sshd@4-64.23.129.27:22-147.75.109.163:48830.service: Deactivated successfully. Dec 13 08:48:36.031808 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 08:48:36.032326 systemd[1]: session-5.scope: Consumed 5.036s CPU time, 189.3M memory peak, 0B memory swap peak. Dec 13 08:48:36.033416 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Dec 13 08:48:36.035587 systemd-logind[1446]: Removed session 5. Dec 13 08:48:36.681968 kubelet[2507]: E1213 08:48:36.681903 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:36.845338 kubelet[2507]: E1213 08:48:36.845220 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:37.683596 kubelet[2507]: E1213 08:48:37.683533 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:43.292704 kubelet[2507]: E1213 08:48:43.292572 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:43.309757 kubelet[2507]: I1213 08:48:43.309632 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-7-437820f1b8" podStartSLOduration=9.309607739 podStartE2EDuration="9.309607739s" podCreationTimestamp="2024-12-13 08:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:35.774427348 +0000 UTC m=+1.345358506" watchObservedRunningTime="2024-12-13 08:48:43.309607739 +0000 UTC m=+8.880538889" Dec 13 08:48:43.698528 kubelet[2507]: E1213 08:48:43.698269 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:45.703642 kubelet[2507]: E1213 08:48:45.701162 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:46.708574 
kubelet[2507]: E1213 08:48:46.708515 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:46.851094 kubelet[2507]: E1213 08:48:46.850625 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:47.261967 kubelet[2507]: I1213 08:48:47.261926 2507 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 08:48:47.262786 containerd[1461]: time="2024-12-13T08:48:47.262603721Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 08:48:47.263902 kubelet[2507]: I1213 08:48:47.263074 2507 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 08:48:48.041133 kubelet[2507]: I1213 08:48:48.041066 2507 topology_manager.go:215] "Topology Admit Handler" podUID="fb2c64a8-f4de-41b7-87b4-565eb9dcf143" podNamespace="kube-system" podName="kube-proxy-22f52" Dec 13 08:48:48.057478 kubelet[2507]: I1213 08:48:48.055733 2507 topology_manager.go:215] "Topology Admit Handler" podUID="8025359c-9398-41df-a52e-fc98f7150dbf" podNamespace="kube-flannel" podName="kube-flannel-ds-cxk48" Dec 13 08:48:48.056600 systemd[1]: Created slice kubepods-besteffort-podfb2c64a8_f4de_41b7_87b4_565eb9dcf143.slice - libcontainer container kubepods-besteffort-podfb2c64a8_f4de_41b7_87b4_565eb9dcf143.slice. Dec 13 08:48:48.078627 systemd[1]: Created slice kubepods-burstable-pod8025359c_9398_41df_a52e_fc98f7150dbf.slice - libcontainer container kubepods-burstable-pod8025359c_9398_41df_a52e_fc98f7150dbf.slice. 
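The recurring dns.go "Nameserver limits exceeded" warnings mean the node's resolv.conf carries more nameserver entries than the kubelet will pass through to pods, so it truncates the list to the applied line shown. A standalone sketch of checking for that condition follows; the /etc/resolv.conf path and the limit of three are conventional values assumed here, not facts stated in this log.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit the kubelet warns about;
// treat the exact constant as an assumption for this sketch.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("found %d nameserver entries: %v\n", len(servers), servers)
	if len(servers) > maxNameservers {
		fmt.Printf("more than %d entries: %d of them would be dropped\n",
			maxNameservers, len(servers)-maxNameservers)
	}
}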
Dec 13 08:48:48.109038 kubelet[2507]: I1213 08:48:48.108471 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb2c64a8-f4de-41b7-87b4-565eb9dcf143-lib-modules\") pod \"kube-proxy-22f52\" (UID: \"fb2c64a8-f4de-41b7-87b4-565eb9dcf143\") " pod="kube-system/kube-proxy-22f52" Dec 13 08:48:48.109038 kubelet[2507]: I1213 08:48:48.108573 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8025359c-9398-41df-a52e-fc98f7150dbf-flannel-cfg\") pod \"kube-flannel-ds-cxk48\" (UID: \"8025359c-9398-41df-a52e-fc98f7150dbf\") " pod="kube-flannel/kube-flannel-ds-cxk48" Dec 13 08:48:48.109038 kubelet[2507]: I1213 08:48:48.108638 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8025359c-9398-41df-a52e-fc98f7150dbf-xtables-lock\") pod \"kube-flannel-ds-cxk48\" (UID: \"8025359c-9398-41df-a52e-fc98f7150dbf\") " pod="kube-flannel/kube-flannel-ds-cxk48" Dec 13 08:48:48.109038 kubelet[2507]: I1213 08:48:48.108697 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb2c64a8-f4de-41b7-87b4-565eb9dcf143-kube-proxy\") pod \"kube-proxy-22f52\" (UID: \"fb2c64a8-f4de-41b7-87b4-565eb9dcf143\") " pod="kube-system/kube-proxy-22f52" Dec 13 08:48:48.109038 kubelet[2507]: I1213 08:48:48.108778 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb2c64a8-f4de-41b7-87b4-565eb9dcf143-xtables-lock\") pod \"kube-proxy-22f52\" (UID: \"fb2c64a8-f4de-41b7-87b4-565eb9dcf143\") " pod="kube-system/kube-proxy-22f52" Dec 13 08:48:48.109408 kubelet[2507]: I1213 08:48:48.108813 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8025359c-9398-41df-a52e-fc98f7150dbf-cni\") pod \"kube-flannel-ds-cxk48\" (UID: \"8025359c-9398-41df-a52e-fc98f7150dbf\") " pod="kube-flannel/kube-flannel-ds-cxk48" Dec 13 08:48:48.109408 kubelet[2507]: I1213 08:48:48.108862 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngtp6\" (UniqueName: \"kubernetes.io/projected/8025359c-9398-41df-a52e-fc98f7150dbf-kube-api-access-ngtp6\") pod \"kube-flannel-ds-cxk48\" (UID: \"8025359c-9398-41df-a52e-fc98f7150dbf\") " pod="kube-flannel/kube-flannel-ds-cxk48" Dec 13 08:48:48.109408 kubelet[2507]: I1213 08:48:48.108889 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpfr9\" (UniqueName: \"kubernetes.io/projected/fb2c64a8-f4de-41b7-87b4-565eb9dcf143-kube-api-access-cpfr9\") pod \"kube-proxy-22f52\" (UID: \"fb2c64a8-f4de-41b7-87b4-565eb9dcf143\") " pod="kube-system/kube-proxy-22f52" Dec 13 08:48:48.109408 kubelet[2507]: I1213 08:48:48.108913 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8025359c-9398-41df-a52e-fc98f7150dbf-run\") pod \"kube-flannel-ds-cxk48\" (UID: \"8025359c-9398-41df-a52e-fc98f7150dbf\") " pod="kube-flannel/kube-flannel-ds-cxk48" Dec 13 08:48:48.109408 kubelet[2507]: I1213 08:48:48.108952 2507 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8025359c-9398-41df-a52e-fc98f7150dbf-cni-plugin\") pod \"kube-flannel-ds-cxk48\" (UID: \"8025359c-9398-41df-a52e-fc98f7150dbf\") " pod="kube-flannel/kube-flannel-ds-cxk48" Dec 13 08:48:48.369573 kubelet[2507]: E1213 08:48:48.369112 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:48.371028 containerd[1461]: time="2024-12-13T08:48:48.370614203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-22f52,Uid:fb2c64a8-f4de-41b7-87b4-565eb9dcf143,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:48.385831 kubelet[2507]: E1213 08:48:48.383041 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:48.385996 containerd[1461]: time="2024-12-13T08:48:48.385289935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cxk48,Uid:8025359c-9398-41df-a52e-fc98f7150dbf,Namespace:kube-flannel,Attempt:0,}" Dec 13 08:48:48.420606 containerd[1461]: time="2024-12-13T08:48:48.420475905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:48.420606 containerd[1461]: time="2024-12-13T08:48:48.420546168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:48.420858 containerd[1461]: time="2024-12-13T08:48:48.420562921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:48.420858 containerd[1461]: time="2024-12-13T08:48:48.420658169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:48.444076 containerd[1461]: time="2024-12-13T08:48:48.443409367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:48.444076 containerd[1461]: time="2024-12-13T08:48:48.443509232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:48.444076 containerd[1461]: time="2024-12-13T08:48:48.443532682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:48.444076 containerd[1461]: time="2024-12-13T08:48:48.443652210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:48.470722 systemd[1]: Started cri-containerd-f1501c5b3eccde199785a852ada5da97c5706b944008f8fadbfc423913a4e253.scope - libcontainer container f1501c5b3eccde199785a852ada5da97c5706b944008f8fadbfc423913a4e253. Dec 13 08:48:48.491379 systemd[1]: Started cri-containerd-09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098.scope - libcontainer container 09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098. 
Dec 13 08:48:48.535582 containerd[1461]: time="2024-12-13T08:48:48.535320009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-22f52,Uid:fb2c64a8-f4de-41b7-87b4-565eb9dcf143,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1501c5b3eccde199785a852ada5da97c5706b944008f8fadbfc423913a4e253\"" Dec 13 08:48:48.539092 kubelet[2507]: E1213 08:48:48.538505 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:48.545382 containerd[1461]: time="2024-12-13T08:48:48.545286718Z" level=info msg="CreateContainer within sandbox \"f1501c5b3eccde199785a852ada5da97c5706b944008f8fadbfc423913a4e253\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 08:48:48.569665 containerd[1461]: time="2024-12-13T08:48:48.568877501Z" level=info msg="CreateContainer within sandbox \"f1501c5b3eccde199785a852ada5da97c5706b944008f8fadbfc423913a4e253\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dab66f96251108eb383471ce21fbaf3609e297b167b984e57b4ef92971534fb0\"" Dec 13 08:48:48.571046 containerd[1461]: time="2024-12-13T08:48:48.570962408Z" level=info msg="StartContainer for \"dab66f96251108eb383471ce21fbaf3609e297b167b984e57b4ef92971534fb0\"" Dec 13 08:48:48.592401 containerd[1461]: time="2024-12-13T08:48:48.592207056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cxk48,Uid:8025359c-9398-41df-a52e-fc98f7150dbf,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098\"" Dec 13 08:48:48.593717 kubelet[2507]: E1213 08:48:48.593677 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:48.597298 containerd[1461]: time="2024-12-13T08:48:48.597025844Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 08:48:48.631763 systemd[1]: Started cri-containerd-dab66f96251108eb383471ce21fbaf3609e297b167b984e57b4ef92971534fb0.scope - libcontainer container dab66f96251108eb383471ce21fbaf3609e297b167b984e57b4ef92971534fb0. Dec 13 08:48:48.672662 containerd[1461]: time="2024-12-13T08:48:48.672598612Z" level=info msg="StartContainer for \"dab66f96251108eb383471ce21fbaf3609e297b167b984e57b4ef92971534fb0\" returns successfully" Dec 13 08:48:48.720080 kubelet[2507]: E1213 08:48:48.719851 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:49.150629 update_engine[1447]: I20241213 08:48:49.150488 1447 update_attempter.cc:509] Updating boot flags... Dec 13 08:48:49.193144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2812) Dec 13 08:48:49.287524 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2760) Dec 13 08:48:50.613198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556007544.mount: Deactivated successfully. 
Dec 13 08:48:50.661155 containerd[1461]: time="2024-12-13T08:48:50.661084061Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:50.662679 containerd[1461]: time="2024-12-13T08:48:50.662191234Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Dec 13 08:48:50.663562 containerd[1461]: time="2024-12-13T08:48:50.663498601Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:50.667082 containerd[1461]: time="2024-12-13T08:48:50.667020660Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:50.668587 containerd[1461]: time="2024-12-13T08:48:50.668520700Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.071444714s" Dec 13 08:48:50.668947 containerd[1461]: time="2024-12-13T08:48:50.668787020Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 08:48:50.673598 containerd[1461]: time="2024-12-13T08:48:50.673176792Z" level=info msg="CreateContainer within sandbox \"09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 08:48:50.697671 containerd[1461]: time="2024-12-13T08:48:50.697527709Z" level=info msg="CreateContainer within sandbox \"09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428\"" Dec 13 08:48:50.699116 containerd[1461]: time="2024-12-13T08:48:50.698371869Z" level=info msg="StartContainer for \"f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428\"" Dec 13 08:48:50.739327 systemd[1]: Started cri-containerd-f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428.scope - libcontainer container f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428. Dec 13 08:48:50.784799 systemd[1]: cri-containerd-f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428.scope: Deactivated successfully. 
Dec 13 08:48:50.787183 containerd[1461]: time="2024-12-13T08:48:50.786823605Z" level=info msg="StartContainer for \"f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428\" returns successfully" Dec 13 08:48:50.832357 containerd[1461]: time="2024-12-13T08:48:50.832251301Z" level=info msg="shim disconnected" id=f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428 namespace=k8s.io Dec 13 08:48:50.832357 containerd[1461]: time="2024-12-13T08:48:50.832348565Z" level=warning msg="cleaning up after shim disconnected" id=f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428 namespace=k8s.io Dec 13 08:48:50.832357 containerd[1461]: time="2024-12-13T08:48:50.832358598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:48:51.497915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f858e312f2c216511d1991d6085247e69f17bfd5581d872acad0429f89543428-rootfs.mount: Deactivated successfully. Dec 13 08:48:51.730585 kubelet[2507]: E1213 08:48:51.730533 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:51.736038 containerd[1461]: time="2024-12-13T08:48:51.735919734Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 08:48:51.751582 kubelet[2507]: I1213 08:48:51.750544 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-22f52" podStartSLOduration=3.750515172 podStartE2EDuration="3.750515172s" podCreationTimestamp="2024-12-13 08:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:48.737604205 +0000 UTC m=+14.308535354" watchObservedRunningTime="2024-12-13 08:48:51.750515172 +0000 UTC m=+17.321446325" Dec 13 08:48:53.797764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974532324.mount: Deactivated successfully. 
Dec 13 08:48:55.408783 containerd[1461]: time="2024-12-13T08:48:55.408348095Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:55.409259 containerd[1461]: time="2024-12-13T08:48:55.409162286Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 08:48:55.409988 containerd[1461]: time="2024-12-13T08:48:55.409949092Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:55.414479 containerd[1461]: time="2024-12-13T08:48:55.414112246Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:55.415846 containerd[1461]: time="2024-12-13T08:48:55.415774330Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.67978928s" Dec 13 08:48:55.415846 containerd[1461]: time="2024-12-13T08:48:55.415842920Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 08:48:55.461185 containerd[1461]: time="2024-12-13T08:48:55.461093967Z" level=info msg="CreateContainer within sandbox \"09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 08:48:55.475304 containerd[1461]: time="2024-12-13T08:48:55.475084425Z" level=info msg="CreateContainer within sandbox \"09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff\"" Dec 13 08:48:55.477339 containerd[1461]: time="2024-12-13T08:48:55.477280659Z" level=info msg="StartContainer for \"71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff\"" Dec 13 08:48:55.516256 systemd[1]: run-containerd-runc-k8s.io-71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff-runc.QsOXVw.mount: Deactivated successfully. Dec 13 08:48:55.526227 systemd[1]: Started cri-containerd-71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff.scope - libcontainer container 71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff. Dec 13 08:48:55.568262 systemd[1]: cri-containerd-71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff.scope: Deactivated successfully. 
Dec 13 08:48:55.570819 containerd[1461]: time="2024-12-13T08:48:55.570616479Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8025359c_9398_41df_a52e_fc98f7150dbf.slice/cri-containerd-71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff.scope/memory.events\": no such file or directory" Dec 13 08:48:55.575246 containerd[1461]: time="2024-12-13T08:48:55.574671628Z" level=info msg="StartContainer for \"71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff\" returns successfully" Dec 13 08:48:55.590909 kubelet[2507]: I1213 08:48:55.590680 2507 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 08:48:55.618143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff-rootfs.mount: Deactivated successfully. Dec 13 08:48:55.649201 kubelet[2507]: I1213 08:48:55.649033 2507 topology_manager.go:215] "Topology Admit Handler" podUID="f8a70286-08e7-416f-a17e-c2e5d78474d2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xw5jj" Dec 13 08:48:55.649594 kubelet[2507]: I1213 08:48:55.649268 2507 topology_manager.go:215] "Topology Admit Handler" podUID="91b190c2-7675-4b8c-890c-5284a189bcb0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wdj2t" Dec 13 08:48:55.665270 containerd[1461]: time="2024-12-13T08:48:55.662159079Z" level=info msg="shim disconnected" id=71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff namespace=k8s.io Dec 13 08:48:55.665270 containerd[1461]: time="2024-12-13T08:48:55.662226964Z" level=warning msg="cleaning up after shim disconnected" id=71aa59a52a323c46af912ba46cc8d67b94c693901fed36e11bd201fc539beeff namespace=k8s.io Dec 13 08:48:55.665270 containerd[1461]: time="2024-12-13T08:48:55.662237075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:48:55.668686 systemd[1]: Created slice kubepods-burstable-pod91b190c2_7675_4b8c_890c_5284a189bcb0.slice - libcontainer container kubepods-burstable-pod91b190c2_7675_4b8c_890c_5284a189bcb0.slice. Dec 13 08:48:55.680953 systemd[1]: Created slice kubepods-burstable-podf8a70286_08e7_416f_a17e_c2e5d78474d2.slice - libcontainer container kubepods-burstable-podf8a70286_08e7_416f_a17e_c2e5d78474d2.slice. 
Dec 13 08:48:55.748977 kubelet[2507]: E1213 08:48:55.748912 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:55.753793 containerd[1461]: time="2024-12-13T08:48:55.752145429Z" level=info msg="CreateContainer within sandbox \"09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 08:48:55.764063 kubelet[2507]: I1213 08:48:55.763987 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a70286-08e7-416f-a17e-c2e5d78474d2-config-volume\") pod \"coredns-7db6d8ff4d-xw5jj\" (UID: \"f8a70286-08e7-416f-a17e-c2e5d78474d2\") " pod="kube-system/coredns-7db6d8ff4d-xw5jj" Dec 13 08:48:55.764063 kubelet[2507]: I1213 08:48:55.764074 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91b190c2-7675-4b8c-890c-5284a189bcb0-config-volume\") pod \"coredns-7db6d8ff4d-wdj2t\" (UID: \"91b190c2-7675-4b8c-890c-5284a189bcb0\") " pod="kube-system/coredns-7db6d8ff4d-wdj2t" Dec 13 08:48:55.764671 kubelet[2507]: I1213 08:48:55.764610 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb29x\" (UniqueName: \"kubernetes.io/projected/91b190c2-7675-4b8c-890c-5284a189bcb0-kube-api-access-lb29x\") pod \"coredns-7db6d8ff4d-wdj2t\" (UID: \"91b190c2-7675-4b8c-890c-5284a189bcb0\") " pod="kube-system/coredns-7db6d8ff4d-wdj2t" Dec 13 08:48:55.764671 kubelet[2507]: I1213 08:48:55.764671 2507 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khj99\" (UniqueName: \"kubernetes.io/projected/f8a70286-08e7-416f-a17e-c2e5d78474d2-kube-api-access-khj99\") pod \"coredns-7db6d8ff4d-xw5jj\" (UID: \"f8a70286-08e7-416f-a17e-c2e5d78474d2\") " pod="kube-system/coredns-7db6d8ff4d-xw5jj" Dec 13 08:48:55.768403 containerd[1461]: time="2024-12-13T08:48:55.768343508Z" level=info msg="CreateContainer within sandbox \"09505b599c898e9f71425dc17a458ee65a8e7a4bdd08d51686af4c96c239c098\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b579bc993ef2a3bcc1d65b58e44e1cd001d7c54415dc217de73c60e267593bca\"" Dec 13 08:48:55.769461 containerd[1461]: time="2024-12-13T08:48:55.769409643Z" level=info msg="StartContainer for \"b579bc993ef2a3bcc1d65b58e44e1cd001d7c54415dc217de73c60e267593bca\"" Dec 13 08:48:55.814273 systemd[1]: Started cri-containerd-b579bc993ef2a3bcc1d65b58e44e1cd001d7c54415dc217de73c60e267593bca.scope - libcontainer container b579bc993ef2a3bcc1d65b58e44e1cd001d7c54415dc217de73c60e267593bca. 
Dec 13 08:48:55.860249 containerd[1461]: time="2024-12-13T08:48:55.860116401Z" level=info msg="StartContainer for \"b579bc993ef2a3bcc1d65b58e44e1cd001d7c54415dc217de73c60e267593bca\" returns successfully" Dec 13 08:48:55.977736 kubelet[2507]: E1213 08:48:55.977552 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:55.980491 containerd[1461]: time="2024-12-13T08:48:55.980152577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wdj2t,Uid:91b190c2-7675-4b8c-890c-5284a189bcb0,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:55.987997 kubelet[2507]: E1213 08:48:55.987267 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:55.988497 containerd[1461]: time="2024-12-13T08:48:55.988441076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xw5jj,Uid:f8a70286-08e7-416f-a17e-c2e5d78474d2,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:56.025587 containerd[1461]: time="2024-12-13T08:48:56.024972152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wdj2t,Uid:91b190c2-7675-4b8c-890c-5284a189bcb0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e50fa416c6e5c74204039882f6708a689575c5e07ab7bd25ec6b604140b63a3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 08:48:56.025982 kubelet[2507]: E1213 08:48:56.025896 2507 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e50fa416c6e5c74204039882f6708a689575c5e07ab7bd25ec6b604140b63a3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 08:48:56.026147 kubelet[2507]: E1213 08:48:56.026123 2507 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e50fa416c6e5c74204039882f6708a689575c5e07ab7bd25ec6b604140b63a3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wdj2t" Dec 13 08:48:56.027731 kubelet[2507]: E1213 08:48:56.027495 2507 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e50fa416c6e5c74204039882f6708a689575c5e07ab7bd25ec6b604140b63a3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wdj2t" Dec 13 08:48:56.028224 containerd[1461]: time="2024-12-13T08:48:56.027886591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xw5jj,Uid:f8a70286-08e7-416f-a17e-c2e5d78474d2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad5ea9b037df246137dfff8abc5de02d4d228211355667dc09c444f43235c219\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 08:48:56.028334 kubelet[2507]: E1213 08:48:56.027976 2507 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wdj2t_kube-system(91b190c2-7675-4b8c-890c-5284a189bcb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wdj2t_kube-system(91b190c2-7675-4b8c-890c-5284a189bcb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e50fa416c6e5c74204039882f6708a689575c5e07ab7bd25ec6b604140b63a3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-wdj2t" podUID="91b190c2-7675-4b8c-890c-5284a189bcb0" Dec 13 08:48:56.029322 kubelet[2507]: E1213 08:48:56.029259 2507 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5ea9b037df246137dfff8abc5de02d4d228211355667dc09c444f43235c219\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 08:48:56.029424 kubelet[2507]: E1213 08:48:56.029341 2507 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5ea9b037df246137dfff8abc5de02d4d228211355667dc09c444f43235c219\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-xw5jj" Dec 13 08:48:56.029424 kubelet[2507]: E1213 08:48:56.029373 2507 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5ea9b037df246137dfff8abc5de02d4d228211355667dc09c444f43235c219\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-xw5jj" Dec 13 08:48:56.029514 kubelet[2507]: E1213 08:48:56.029443 2507 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xw5jj_kube-system(f8a70286-08e7-416f-a17e-c2e5d78474d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xw5jj_kube-system(f8a70286-08e7-416f-a17e-c2e5d78474d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad5ea9b037df246137dfff8abc5de02d4d228211355667dc09c444f43235c219\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-xw5jj" podUID="f8a70286-08e7-416f-a17e-c2e5d78474d2" Dec 13 08:48:56.762481 kubelet[2507]: E1213 08:48:56.753710 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:56.779024 kubelet[2507]: I1213 08:48:56.777580 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cxk48" podStartSLOduration=1.94917327 podStartE2EDuration="8.777554831s" podCreationTimestamp="2024-12-13 08:48:48 +0000 UTC" firstStartedPulling="2024-12-13 08:48:48.595199419 +0000 UTC m=+14.166130561" lastFinishedPulling="2024-12-13 08:48:55.423580977 +0000 UTC m=+20.994512122" observedRunningTime="2024-12-13 08:48:56.777137173 +0000 UTC m=+22.348068327" watchObservedRunningTime="2024-12-13 08:48:56.777554831 +0000 UTC m=+22.348485982" Dec 13 08:48:56.961594 
systemd-networkd[1365]: flannel.1: Link UP Dec 13 08:48:56.961605 systemd-networkd[1365]: flannel.1: Gained carrier Dec 13 08:48:57.755586 kubelet[2507]: E1213 08:48:57.755535 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:48:58.198203 systemd-networkd[1365]: flannel.1: Gained IPv6LL Dec 13 08:49:07.608709 kubelet[2507]: E1213 08:49:07.608399 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:07.610522 containerd[1461]: time="2024-12-13T08:49:07.609406304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wdj2t,Uid:91b190c2-7675-4b8c-890c-5284a189bcb0,Namespace:kube-system,Attempt:0,}" Dec 13 08:49:07.650772 systemd-networkd[1365]: cni0: Link UP Dec 13 08:49:07.650780 systemd-networkd[1365]: cni0: Gained carrier Dec 13 08:49:07.663216 kernel: cni0: port 1(veth7fc800fd) entered blocking state Dec 13 08:49:07.663354 kernel: cni0: port 1(veth7fc800fd) entered disabled state Dec 13 08:49:07.662863 systemd-networkd[1365]: cni0: Lost carrier Dec 13 08:49:07.663203 systemd-networkd[1365]: veth7fc800fd: Link UP Dec 13 08:49:07.666807 kernel: veth7fc800fd: entered allmulticast mode Dec 13 08:49:07.666936 kernel: veth7fc800fd: entered promiscuous mode Dec 13 08:49:07.674042 kernel: cni0: port 1(veth7fc800fd) entered blocking state Dec 13 08:49:07.674135 kernel: cni0: port 1(veth7fc800fd) entered forwarding state Dec 13 08:49:07.675100 systemd-networkd[1365]: veth7fc800fd: Gained carrier Dec 13 08:49:07.677319 systemd-networkd[1365]: cni0: Gained carrier Dec 13 08:49:07.688392 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000022938), "name":"cbr0", "type":"bridge"} Dec 13 08:49:07.688392 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Dec 13 08:49:07.715591 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T08:49:07.715471832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:49:07.715591 containerd[1461]: time="2024-12-13T08:49:07.715539240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:49:07.715821 containerd[1461]: time="2024-12-13T08:49:07.715569567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:49:07.715821 containerd[1461]: time="2024-12-13T08:49:07.715674890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:49:07.747343 systemd[1]: Started cri-containerd-70fc4b74d7daec2721d938ccd1d46a07c85d49b0ce2683644a6a2e48bedcb0eb.scope - libcontainer container 70fc4b74d7daec2721d938ccd1d46a07c85d49b0ce2683644a6a2e48bedcb0eb. Dec 13 08:49:07.816231 containerd[1461]: time="2024-12-13T08:49:07.816080232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wdj2t,Uid:91b190c2-7675-4b8c-890c-5284a189bcb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"70fc4b74d7daec2721d938ccd1d46a07c85d49b0ce2683644a6a2e48bedcb0eb\"" Dec 13 08:49:07.818111 kubelet[2507]: E1213 08:49:07.817623 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:07.820943 containerd[1461]: time="2024-12-13T08:49:07.820895875Z" level=info msg="CreateContainer within sandbox \"70fc4b74d7daec2721d938ccd1d46a07c85d49b0ce2683644a6a2e48bedcb0eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:49:07.839869 containerd[1461]: time="2024-12-13T08:49:07.839769699Z" level=info msg="CreateContainer within sandbox \"70fc4b74d7daec2721d938ccd1d46a07c85d49b0ce2683644a6a2e48bedcb0eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"179012d3d1fe8ae3a1a45c6513c7783dd5772c3b07c92467008ee87e5905fbd2\"" Dec 13 08:49:07.841154 containerd[1461]: time="2024-12-13T08:49:07.841082167Z" level=info msg="StartContainer for \"179012d3d1fe8ae3a1a45c6513c7783dd5772c3b07c92467008ee87e5905fbd2\"" Dec 13 08:49:07.877886 systemd[1]: Started cri-containerd-179012d3d1fe8ae3a1a45c6513c7783dd5772c3b07c92467008ee87e5905fbd2.scope - libcontainer container 179012d3d1fe8ae3a1a45c6513c7783dd5772c3b07c92467008ee87e5905fbd2. 
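The RunPodSandbox failures at 08:48:56 came from the flannel CNI plugin finding no /run/flannel/subnet.env yet; once the kube-flannel container is running and flannel.1 comes up, the file exists and the coredns sandbox above is created successfully. A minimal reader for that file, assuming flannel's usual KEY=VALUE layout (FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ); only the path and key names come from flannel's defaults, the rest is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// readSubnetEnv parses the KEY=VALUE pairs flannel writes to
// /run/flannel/subnet.env (FLANNEL_NETWORK, FLANNEL_SUBNET,
// FLANNEL_MTU, FLANNEL_IPMASQ).
func readSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		// Same "no such file or directory" condition reported in the
		// loadFlannelSubnetEnv errors above.
		return nil, err
	}
	defer f.Close()

	env := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := readSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv failed:", err)
		os.Exit(1)
	}
	fmt.Printf("pod subnet %s, mtu %s\n", env["FLANNEL_SUBNET"], env["FLANNEL_MTU"])
}
```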
Dec 13 08:49:07.918187 containerd[1461]: time="2024-12-13T08:49:07.918125500Z" level=info msg="StartContainer for \"179012d3d1fe8ae3a1a45c6513c7783dd5772c3b07c92467008ee87e5905fbd2\" returns successfully" Dec 13 08:49:08.608928 kubelet[2507]: E1213 08:49:08.608601 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:08.611482 containerd[1461]: time="2024-12-13T08:49:08.609559697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xw5jj,Uid:f8a70286-08e7-416f-a17e-c2e5d78474d2,Namespace:kube-system,Attempt:0,}" Dec 13 08:49:08.643948 systemd-networkd[1365]: veth7cfee845: Link UP Dec 13 08:49:08.646029 kernel: cni0: port 2(veth7cfee845) entered blocking state Dec 13 08:49:08.646168 kernel: cni0: port 2(veth7cfee845) entered disabled state Dec 13 08:49:08.648517 kernel: veth7cfee845: entered allmulticast mode Dec 13 08:49:08.651480 kernel: veth7cfee845: entered promiscuous mode Dec 13 08:49:08.653803 kernel: cni0: port 2(veth7cfee845) entered blocking state Dec 13 08:49:08.653919 kernel: cni0: port 2(veth7cfee845) entered forwarding state Dec 13 08:49:08.662695 systemd-networkd[1365]: veth7cfee845: Gained carrier Dec 13 08:49:08.664884 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Dec 13 08:49:08.664884 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Dec 13 08:49:08.691854 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T08:49:08.691599987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:49:08.691854 containerd[1461]: time="2024-12-13T08:49:08.691689452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:49:08.691854 containerd[1461]: time="2024-12-13T08:49:08.691706331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:49:08.693090 containerd[1461]: time="2024-12-13T08:49:08.692946271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:49:08.701232 systemd-networkd[1365]: cni0: Gained IPv6LL Dec 13 08:49:08.727272 systemd[1]: Started cri-containerd-bf70e310dfd2480be58c94a7c7f05f4fd984a9efff16817e59598f4ea434b79e.scope - libcontainer container bf70e310dfd2480be58c94a7c7f05f4fd984a9efff16817e59598f4ea434b79e. 
Dec 13 08:49:08.788383 containerd[1461]: time="2024-12-13T08:49:08.787321880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xw5jj,Uid:f8a70286-08e7-416f-a17e-c2e5d78474d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf70e310dfd2480be58c94a7c7f05f4fd984a9efff16817e59598f4ea434b79e\"" Dec 13 08:49:08.788655 kubelet[2507]: E1213 08:49:08.788267 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:08.796992 containerd[1461]: time="2024-12-13T08:49:08.796939437Z" level=info msg="CreateContainer within sandbox \"bf70e310dfd2480be58c94a7c7f05f4fd984a9efff16817e59598f4ea434b79e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:49:08.798641 kubelet[2507]: E1213 08:49:08.798387 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:08.830176 containerd[1461]: time="2024-12-13T08:49:08.829976196Z" level=info msg="CreateContainer within sandbox \"bf70e310dfd2480be58c94a7c7f05f4fd984a9efff16817e59598f4ea434b79e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc6bff14d374e349a027db6775cd9ea722371926f335d33df88449c76ec3e663\"" Dec 13 08:49:08.831256 containerd[1461]: time="2024-12-13T08:49:08.831215100Z" level=info msg="StartContainer for \"bc6bff14d374e349a027db6775cd9ea722371926f335d33df88449c76ec3e663\"" Dec 13 08:49:08.835851 kubelet[2507]: I1213 08:49:08.835520 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wdj2t" podStartSLOduration=20.835487763 podStartE2EDuration="20.835487763s" podCreationTimestamp="2024-12-13 08:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:49:08.816591428 +0000 UTC m=+34.387522576" watchObservedRunningTime="2024-12-13 08:49:08.835487763 +0000 UTC m=+34.406418911" Dec 13 08:49:08.884397 systemd[1]: Started cri-containerd-bc6bff14d374e349a027db6775cd9ea722371926f335d33df88449c76ec3e663.scope - libcontainer container bc6bff14d374e349a027db6775cd9ea722371926f335d33df88449c76ec3e663. 
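The pod_startup_latency_tracker entry above is internally consistent: with no image pull recorded (both pull timestamps are the zero value), podStartSLOduration is simply observedRunningTime minus the creation timestamp. A quick check with the logged values for coredns-7db6d8ff4d-wdj2t:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse uses the layout matching the "2024-12-13 08:48:48 +0000 UTC"
// form that appears in the kubelet log lines.
func mustParse(value string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", value)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 08:48:48 +0000 UTC")
	running := mustParse("2024-12-13 08:49:08.835487763 +0000 UTC")
	fmt.Println(running.Sub(created)) // prints 20.835487763s, matching podStartSLOduration
}
```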
Dec 13 08:49:08.926805 containerd[1461]: time="2024-12-13T08:49:08.926657905Z" level=info msg="StartContainer for \"bc6bff14d374e349a027db6775cd9ea722371926f335d33df88449c76ec3e663\" returns successfully" Dec 13 08:49:09.399980 systemd-networkd[1365]: veth7fc800fd: Gained IPv6LL Dec 13 08:49:09.802170 kubelet[2507]: E1213 08:49:09.802040 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:09.804347 kubelet[2507]: E1213 08:49:09.803303 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:09.823060 kubelet[2507]: I1213 08:49:09.821834 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xw5jj" podStartSLOduration=21.821812294 podStartE2EDuration="21.821812294s" podCreationTimestamp="2024-12-13 08:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:49:09.821081758 +0000 UTC m=+35.392012906" watchObservedRunningTime="2024-12-13 08:49:09.821812294 +0000 UTC m=+35.392743439" Dec 13 08:49:10.038601 systemd-networkd[1365]: veth7cfee845: Gained IPv6LL Dec 13 08:49:10.771477 systemd[1]: Started sshd@5-64.23.129.27:22-147.75.109.163:52610.service - OpenSSH per-connection server daemon (147.75.109.163:52610). Dec 13 08:49:10.806634 kubelet[2507]: E1213 08:49:10.806559 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:10.810210 kubelet[2507]: E1213 08:49:10.808175 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:10.839625 sshd[3395]: Accepted publickey for core from 147.75.109.163 port 52610 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:10.842157 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:10.849296 systemd-logind[1446]: New session 6 of user core. Dec 13 08:49:10.854297 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 08:49:11.024785 sshd[3395]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:11.031299 systemd[1]: sshd@5-64.23.129.27:22-147.75.109.163:52610.service: Deactivated successfully. Dec 13 08:49:11.035586 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 08:49:11.037068 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Dec 13 08:49:11.038537 systemd-logind[1446]: Removed session 6. Dec 13 08:49:11.808263 kubelet[2507]: E1213 08:49:11.808192 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 08:49:16.044377 systemd[1]: Started sshd@6-64.23.129.27:22-147.75.109.163:44124.service - OpenSSH per-connection server daemon (147.75.109.163:44124). 
Dec 13 08:49:16.119288 sshd[3437]: Accepted publickey for core from 147.75.109.163 port 44124 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:16.121256 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:16.128847 systemd-logind[1446]: New session 7 of user core. Dec 13 08:49:16.134739 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 08:49:16.295128 sshd[3437]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:16.300628 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Dec 13 08:49:16.303813 systemd[1]: sshd@6-64.23.129.27:22-147.75.109.163:44124.service: Deactivated successfully. Dec 13 08:49:16.307759 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 08:49:16.311317 systemd-logind[1446]: Removed session 7. Dec 13 08:49:21.316760 systemd[1]: Started sshd@7-64.23.129.27:22-147.75.109.163:44136.service - OpenSSH per-connection server daemon (147.75.109.163:44136). Dec 13 08:49:21.384630 sshd[3474]: Accepted publickey for core from 147.75.109.163 port 44136 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:21.387211 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:21.397480 systemd-logind[1446]: New session 8 of user core. Dec 13 08:49:21.401357 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 08:49:21.577362 sshd[3474]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:21.583402 systemd[1]: sshd@7-64.23.129.27:22-147.75.109.163:44136.service: Deactivated successfully. Dec 13 08:49:21.586753 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 08:49:21.588191 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Dec 13 08:49:21.589698 systemd-logind[1446]: Removed session 8. Dec 13 08:49:26.598741 systemd[1]: Started sshd@8-64.23.129.27:22-147.75.109.163:38588.service - OpenSSH per-connection server daemon (147.75.109.163:38588). Dec 13 08:49:26.653239 sshd[3509]: Accepted publickey for core from 147.75.109.163 port 38588 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:26.657247 sshd[3509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:26.667442 systemd-logind[1446]: New session 9 of user core. Dec 13 08:49:26.673402 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 08:49:26.845205 sshd[3509]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:26.858229 systemd[1]: sshd@8-64.23.129.27:22-147.75.109.163:38588.service: Deactivated successfully. Dec 13 08:49:26.864028 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 08:49:26.867952 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Dec 13 08:49:26.876702 systemd[1]: Started sshd@9-64.23.129.27:22-147.75.109.163:38604.service - OpenSSH per-connection server daemon (147.75.109.163:38604). Dec 13 08:49:26.879806 systemd-logind[1446]: Removed session 9. Dec 13 08:49:26.941054 sshd[3523]: Accepted publickey for core from 147.75.109.163 port 38604 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:26.943933 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:26.952898 systemd-logind[1446]: New session 10 of user core. Dec 13 08:49:26.958471 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 08:49:27.181529 sshd[3523]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:27.198575 systemd[1]: sshd@9-64.23.129.27:22-147.75.109.163:38604.service: Deactivated successfully. Dec 13 08:49:27.206973 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 08:49:27.209654 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Dec 13 08:49:27.225245 systemd[1]: Started sshd@10-64.23.129.27:22-147.75.109.163:38616.service - OpenSSH per-connection server daemon (147.75.109.163:38616). Dec 13 08:49:27.227116 systemd-logind[1446]: Removed session 10. Dec 13 08:49:27.291555 sshd[3539]: Accepted publickey for core from 147.75.109.163 port 38616 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:27.295289 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:27.307351 systemd-logind[1446]: New session 11 of user core. Dec 13 08:49:27.315370 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 08:49:27.478205 sshd[3539]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:27.485927 systemd[1]: sshd@10-64.23.129.27:22-147.75.109.163:38616.service: Deactivated successfully. Dec 13 08:49:27.489644 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 08:49:27.490991 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Dec 13 08:49:27.492460 systemd-logind[1446]: Removed session 11. Dec 13 08:49:32.504509 systemd[1]: Started sshd@11-64.23.129.27:22-147.75.109.163:38624.service - OpenSSH per-connection server daemon (147.75.109.163:38624). Dec 13 08:49:32.569089 sshd[3588]: Accepted publickey for core from 147.75.109.163 port 38624 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:32.571184 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:32.577451 systemd-logind[1446]: New session 12 of user core. Dec 13 08:49:32.584416 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 08:49:32.727667 sshd[3588]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:32.736918 systemd[1]: sshd@11-64.23.129.27:22-147.75.109.163:38624.service: Deactivated successfully. Dec 13 08:49:32.739799 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 08:49:32.742880 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Dec 13 08:49:32.749459 systemd[1]: Started sshd@12-64.23.129.27:22-147.75.109.163:38636.service - OpenSSH per-connection server daemon (147.75.109.163:38636). Dec 13 08:49:32.751599 systemd-logind[1446]: Removed session 12. Dec 13 08:49:32.801025 sshd[3601]: Accepted publickey for core from 147.75.109.163 port 38636 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:32.803515 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:32.811545 systemd-logind[1446]: New session 13 of user core. Dec 13 08:49:32.817410 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 08:49:33.110563 sshd[3601]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:33.122216 systemd[1]: sshd@12-64.23.129.27:22-147.75.109.163:38636.service: Deactivated successfully. Dec 13 08:49:33.125677 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 08:49:33.128795 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 08:49:33.135497 systemd[1]: Started sshd@13-64.23.129.27:22-147.75.109.163:38644.service - OpenSSH per-connection server daemon (147.75.109.163:38644). Dec 13 08:49:33.138231 systemd-logind[1446]: Removed session 13. Dec 13 08:49:33.198609 sshd[3612]: Accepted publickey for core from 147.75.109.163 port 38644 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:33.200747 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:33.207782 systemd-logind[1446]: New session 14 of user core. Dec 13 08:49:33.217287 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 08:49:35.058471 sshd[3612]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:35.077954 systemd[1]: sshd@13-64.23.129.27:22-147.75.109.163:38644.service: Deactivated successfully. Dec 13 08:49:35.084417 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 08:49:35.090563 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Dec 13 08:49:35.100491 systemd[1]: Started sshd@14-64.23.129.27:22-147.75.109.163:38650.service - OpenSSH per-connection server daemon (147.75.109.163:38650). Dec 13 08:49:35.106437 systemd-logind[1446]: Removed session 14. Dec 13 08:49:35.163253 sshd[3631]: Accepted publickey for core from 147.75.109.163 port 38650 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:35.165202 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:35.172496 systemd-logind[1446]: New session 15 of user core. Dec 13 08:49:35.182453 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 08:49:35.473349 sshd[3631]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:35.487128 systemd[1]: sshd@14-64.23.129.27:22-147.75.109.163:38650.service: Deactivated successfully. Dec 13 08:49:35.490979 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 08:49:35.494469 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Dec 13 08:49:35.499429 systemd[1]: Started sshd@15-64.23.129.27:22-147.75.109.163:38656.service - OpenSSH per-connection server daemon (147.75.109.163:38656). Dec 13 08:49:35.501466 systemd-logind[1446]: Removed session 15. Dec 13 08:49:35.550042 sshd[3643]: Accepted publickey for core from 147.75.109.163 port 38656 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:35.551954 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:35.557748 systemd-logind[1446]: New session 16 of user core. Dec 13 08:49:35.561293 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 08:49:35.708286 sshd[3643]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:35.712330 systemd[1]: sshd@15-64.23.129.27:22-147.75.109.163:38656.service: Deactivated successfully. Dec 13 08:49:35.715627 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 08:49:35.718923 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Dec 13 08:49:35.721633 systemd-logind[1446]: Removed session 16. Dec 13 08:49:40.724446 systemd[1]: Started sshd@16-64.23.129.27:22-147.75.109.163:57210.service - OpenSSH per-connection server daemon (147.75.109.163:57210). 
Dec 13 08:49:40.779094 sshd[3682]: Accepted publickey for core from 147.75.109.163 port 57210 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:40.780960 sshd[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:40.786196 systemd-logind[1446]: New session 17 of user core. Dec 13 08:49:40.798448 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 08:49:40.967396 sshd[3682]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:40.973211 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Dec 13 08:49:40.973509 systemd[1]: sshd@16-64.23.129.27:22-147.75.109.163:57210.service: Deactivated successfully. Dec 13 08:49:40.977529 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 08:49:40.982662 systemd-logind[1446]: Removed session 17. Dec 13 08:49:45.987770 systemd[1]: Started sshd@17-64.23.129.27:22-147.75.109.163:57224.service - OpenSSH per-connection server daemon (147.75.109.163:57224). Dec 13 08:49:46.054033 sshd[3717]: Accepted publickey for core from 147.75.109.163 port 57224 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:46.056103 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:46.064202 systemd-logind[1446]: New session 18 of user core. Dec 13 08:49:46.070464 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 08:49:46.217325 sshd[3717]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:46.223626 systemd[1]: sshd@17-64.23.129.27:22-147.75.109.163:57224.service: Deactivated successfully. Dec 13 08:49:46.226755 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 08:49:46.228139 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Dec 13 08:49:46.229915 systemd-logind[1446]: Removed session 18. Dec 13 08:49:51.238618 systemd[1]: Started sshd@18-64.23.129.27:22-147.75.109.163:55236.service - OpenSSH per-connection server daemon (147.75.109.163:55236). Dec 13 08:49:51.292313 sshd[3754]: Accepted publickey for core from 147.75.109.163 port 55236 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:51.293474 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:51.302313 systemd-logind[1446]: New session 19 of user core. Dec 13 08:49:51.308319 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 08:49:51.449990 sshd[3754]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:51.455409 systemd[1]: sshd@18-64.23.129.27:22-147.75.109.163:55236.service: Deactivated successfully. Dec 13 08:49:51.460323 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 08:49:51.461769 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Dec 13 08:49:51.463822 systemd-logind[1446]: Removed session 19. Dec 13 08:49:56.479598 systemd[1]: Started sshd@19-64.23.129.27:22-147.75.109.163:39504.service - OpenSSH per-connection server daemon (147.75.109.163:39504). Dec 13 08:49:56.532361 sshd[3787]: Accepted publickey for core from 147.75.109.163 port 39504 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:56.536515 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:56.545487 systemd-logind[1446]: New session 20 of user core. Dec 13 08:49:56.555323 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 08:49:56.691992 sshd[3787]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:56.697228 systemd[1]: sshd@19-64.23.129.27:22-147.75.109.163:39504.service: Deactivated successfully. Dec 13 08:49:56.699767 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 08:49:56.700651 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Dec 13 08:49:56.702144 systemd-logind[1446]: Removed session 20. Dec 13 08:49:57.607895 kubelet[2507]: E1213 08:49:57.607799 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"