Dec 13 09:10:43.067502 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 09:10:43.067545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:10:43.067562 kernel: BIOS-provided physical RAM map: Dec 13 09:10:43.067572 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 09:10:43.067581 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 09:10:43.067590 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 09:10:43.067601 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 13 09:10:43.067610 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 13 09:10:43.067619 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 09:10:43.067632 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 09:10:43.067650 kernel: NX (Execute Disable) protection: active Dec 13 09:10:43.067662 kernel: APIC: Static calls initialized Dec 13 09:10:43.067672 kernel: SMBIOS 2.8 present. Dec 13 09:10:43.067683 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 13 09:10:43.067695 kernel: Hypervisor detected: KVM Dec 13 09:10:43.067709 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 09:10:43.067727 kernel: kvm-clock: using sched offset of 3679859346 cycles Dec 13 09:10:43.067739 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 09:10:43.067769 kernel: tsc: Detected 2000.000 MHz processor Dec 13 09:10:43.067782 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 09:10:43.067795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 09:10:43.067809 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 13 09:10:43.067822 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 09:10:43.067834 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 09:10:43.067850 kernel: ACPI: Early table checksum verification disabled Dec 13 09:10:43.067862 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 13 09:10:43.067875 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:10:43.067885 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:10:43.067897 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:10:43.067908 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 09:10:43.067920 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:10:43.067931 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:10:43.067943 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:10:43.067959 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 09:10:43.067969 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Dec 13 09:10:43.068026 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Dec 13 09:10:43.068036 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 09:10:43.068046 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Dec 13 09:10:43.068056 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Dec 13 09:10:43.068066 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Dec 13 09:10:43.068093 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Dec 13 09:10:43.068105 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 09:10:43.068112 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 09:10:43.068120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 09:10:43.068127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 09:10:43.068135 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Dec 13 09:10:43.068142 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Dec 13 09:10:43.068153 kernel: Zone ranges: Dec 13 09:10:43.068161 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 09:10:43.068168 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 13 09:10:43.068176 kernel: Normal empty Dec 13 09:10:43.068183 kernel: Movable zone start for each node Dec 13 09:10:43.068190 kernel: Early memory node ranges Dec 13 09:10:43.068198 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 09:10:43.068205 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 13 09:10:43.068212 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 13 09:10:43.068222 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 09:10:43.068233 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 09:10:43.068241 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 13 09:10:43.068248 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 09:10:43.068256 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 09:10:43.068263 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 09:10:43.068270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 09:10:43.068278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 09:10:43.068285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 09:10:43.068295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 09:10:43.068302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 09:10:43.068310 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 09:10:43.068317 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 09:10:43.068324 kernel: TSC deadline timer available Dec 13 09:10:43.068332 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 09:10:43.068339 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 09:10:43.068347 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 09:10:43.068357 kernel: Booting paravirtualized kernel on KVM Dec 13 09:10:43.068365 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 09:10:43.068376 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 09:10:43.068383 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Dec 13 09:10:43.068391 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 09:10:43.068398 kernel: pcpu-alloc: [0] 0 1 Dec 13 09:10:43.068405 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 09:10:43.068414 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:10:43.068423 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 09:10:43.068433 kernel: random: crng init done Dec 13 09:10:43.068440 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 09:10:43.068447 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 09:10:43.068455 kernel: Fallback order for Node 0: 0 Dec 13 09:10:43.068462 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Dec 13 09:10:43.068469 kernel: Policy zone: DMA32 Dec 13 09:10:43.068477 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 09:10:43.068485 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Dec 13 09:10:43.068492 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 09:10:43.068503 kernel: Kernel/User page tables isolation: enabled Dec 13 09:10:43.068511 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 09:10:43.068518 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 09:10:43.068525 kernel: Dynamic Preempt: voluntary Dec 13 09:10:43.068532 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 09:10:43.068545 kernel: rcu: RCU event tracing is enabled. Dec 13 09:10:43.068553 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 09:10:43.068561 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 09:10:43.068568 kernel: Rude variant of Tasks RCU enabled. Dec 13 09:10:43.068576 kernel: Tracing variant of Tasks RCU enabled. Dec 13 09:10:43.068586 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 09:10:43.068594 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 09:10:43.068601 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 09:10:43.068612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 09:10:43.068620 kernel: Console: colour VGA+ 80x25 Dec 13 09:10:43.068627 kernel: printk: console [tty0] enabled Dec 13 09:10:43.068635 kernel: printk: console [ttyS0] enabled Dec 13 09:10:43.068642 kernel: ACPI: Core revision 20230628 Dec 13 09:10:43.068650 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 09:10:43.068660 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 09:10:43.068667 kernel: x2apic enabled Dec 13 09:10:43.068675 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 09:10:43.068682 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 09:10:43.068690 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Dec 13 09:10:43.068697 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Dec 13 09:10:43.068705 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 09:10:43.068712 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 09:10:43.068730 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 09:10:43.068738 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 09:10:43.068747 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 09:10:43.068757 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 09:10:43.068765 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 09:10:43.068773 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 09:10:43.068782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 09:10:43.068789 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 09:10:43.068797 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 09:10:43.068812 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 09:10:43.068820 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 09:10:43.068828 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 09:10:43.068836 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 09:10:43.068844 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 09:10:43.068852 kernel: Freeing SMP alternatives memory: 32K Dec 13 09:10:43.068865 kernel: pid_max: default: 32768 minimum: 301 Dec 13 09:10:43.068876 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 09:10:43.068892 kernel: landlock: Up and running. Dec 13 09:10:43.068904 kernel: SELinux: Initializing. Dec 13 09:10:43.068916 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.068928 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.068941 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 13 09:10:43.068954 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:10:43.068968 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:10:43.069005 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 09:10:43.069018 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Dec 13 09:10:43.069034 kernel: signal: max sigframe size: 1776 Dec 13 09:10:43.069045 kernel: rcu: Hierarchical SRCU implementation. Dec 13 09:10:43.069057 kernel: rcu: Max phase no-delay instances is 400. Dec 13 09:10:43.069071 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 09:10:43.069086 kernel: smp: Bringing up secondary CPUs ... Dec 13 09:10:43.069100 kernel: smpboot: x86: Booting SMP configuration: Dec 13 09:10:43.069117 kernel: .... node #0, CPUs: #1 Dec 13 09:10:43.069130 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 09:10:43.069142 kernel: smpboot: Max logical packages: 1 Dec 13 09:10:43.069160 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Dec 13 09:10:43.069174 kernel: devtmpfs: initialized Dec 13 09:10:43.069186 kernel: x86/mm: Memory block size: 128MB Dec 13 09:10:43.069199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 09:10:43.069211 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.069223 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 09:10:43.069235 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 09:10:43.069247 kernel: audit: initializing netlink subsys (disabled) Dec 13 09:10:43.069258 kernel: audit: type=2000 audit(1734081041.934:1): state=initialized audit_enabled=0 res=1 Dec 13 09:10:43.069275 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 09:10:43.069288 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 09:10:43.069301 kernel: cpuidle: using governor menu Dec 13 09:10:43.069315 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 09:10:43.069328 kernel: dca service started, version 1.12.1 Dec 13 09:10:43.069342 kernel: PCI: Using configuration type 1 for base access Dec 13 09:10:43.069356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 09:10:43.069370 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 09:10:43.069382 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 09:10:43.069398 kernel: ACPI: Added _OSI(Module Device) Dec 13 09:10:43.069409 kernel: ACPI: Added _OSI(Processor Device) Dec 13 09:10:43.069421 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 09:10:43.069434 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 09:10:43.069446 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 09:10:43.069458 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 09:10:43.069470 kernel: ACPI: Interpreter enabled Dec 13 09:10:43.069482 kernel: ACPI: PM: (supports S0 S5) Dec 13 09:10:43.069494 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 09:10:43.069509 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 09:10:43.069521 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 09:10:43.069533 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 09:10:43.069544 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 09:10:43.069870 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 09:10:43.070056 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 09:10:43.070204 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 09:10:43.070228 kernel: acpiphp: Slot [3] registered Dec 13 09:10:43.070241 kernel: acpiphp: Slot [4] registered Dec 13 09:10:43.070253 kernel: acpiphp: Slot [5] registered Dec 13 09:10:43.070266 kernel: acpiphp: Slot [6] registered Dec 13 09:10:43.070277 kernel: acpiphp: Slot [7] registered Dec 13 09:10:43.070289 kernel: acpiphp: Slot [8] registered Dec 13 09:10:43.070300 kernel: acpiphp: Slot [9] registered Dec 13 09:10:43.070311 kernel: acpiphp: Slot [10] registered Dec 13 09:10:43.070324 kernel: acpiphp: Slot [11] registered Dec 13 09:10:43.070340 kernel: acpiphp: Slot [12] registered Dec 13 09:10:43.070352 kernel: acpiphp: Slot [13] registered Dec 13 09:10:43.070363 kernel: acpiphp: Slot [14] registered Dec 13 09:10:43.070375 kernel: acpiphp: Slot [15] registered Dec 13 09:10:43.070387 kernel: acpiphp: Slot [16] registered Dec 13 09:10:43.070399 kernel: acpiphp: Slot [17] registered Dec 13 09:10:43.070411 kernel: acpiphp: Slot [18] registered Dec 13 09:10:43.070424 kernel: acpiphp: Slot [19] registered Dec 13 09:10:43.070436 kernel: acpiphp: Slot [20] registered Dec 13 09:10:43.070448 kernel: acpiphp: Slot [21] registered Dec 13 09:10:43.070465 kernel: acpiphp: Slot [22] registered Dec 13 09:10:43.070476 kernel: acpiphp: Slot [23] registered Dec 13 09:10:43.070488 kernel: acpiphp: Slot [24] registered Dec 13 09:10:43.070499 kernel: acpiphp: Slot [25] registered Dec 13 09:10:43.070512 kernel: acpiphp: Slot [26] registered Dec 13 09:10:43.070524 kernel: acpiphp: Slot [27] registered Dec 13 09:10:43.070536 kernel: acpiphp: Slot [28] registered Dec 13 09:10:43.070549 kernel: acpiphp: Slot [29] registered Dec 13 09:10:43.070562 kernel: acpiphp: Slot [30] registered Dec 13 09:10:43.070575 kernel: acpiphp: Slot [31] registered Dec 13 09:10:43.070592 kernel: PCI host bridge to bus 0000:00 Dec 13 09:10:43.070775 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 09:10:43.070871 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Dec 13 09:10:43.070958 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 09:10:43.071060 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 09:10:43.071185 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 09:10:43.071301 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 09:10:43.071483 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 09:10:43.071666 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 09:10:43.071830 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 09:10:43.071970 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Dec 13 09:10:43.072141 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 09:10:43.072282 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 09:10:43.072433 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 09:10:43.072570 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 09:10:43.072747 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Dec 13 09:10:43.072879 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Dec 13 09:10:43.073079 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 09:10:43.073223 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 09:10:43.073358 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 09:10:43.073527 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 09:10:43.073673 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 09:10:43.073812 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 09:10:43.073955 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Dec 13 09:10:43.074072 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 09:10:43.074210 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 09:10:43.074361 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 09:10:43.074459 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Dec 13 09:10:43.074585 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Dec 13 09:10:43.074705 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 09:10:43.074872 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 09:10:43.075032 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Dec 13 09:10:43.075328 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Dec 13 09:10:43.075512 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 09:10:43.075674 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Dec 13 09:10:43.075813 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Dec 13 09:10:43.075948 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Dec 13 09:10:43.076454 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 09:10:43.076666 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Dec 13 09:10:43.076776 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 09:10:43.076881 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Dec 13 09:10:43.076996 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 09:10:43.077159 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Dec 13 09:10:43.077256 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Dec 13 09:10:43.077351 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Dec 13 09:10:43.077494 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Dec 13 09:10:43.077631 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 09:10:43.077750 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Dec 13 09:10:43.077892 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 13 09:10:43.077909 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 09:10:43.077923 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 09:10:43.077935 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 09:10:43.077946 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 09:10:43.077958 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 09:10:43.077993 kernel: iommu: Default domain type: Translated Dec 13 09:10:43.078006 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 09:10:43.078020 kernel: PCI: Using ACPI for IRQ routing Dec 13 09:10:43.078032 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 09:10:43.078043 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 09:10:43.078056 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 13 09:10:43.078202 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 09:10:43.078319 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 09:10:43.078413 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 09:10:43.078428 kernel: vgaarb: loaded Dec 13 09:10:43.078437 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 09:10:43.078445 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 09:10:43.078454 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 09:10:43.078462 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 09:10:43.078471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 09:10:43.078479 kernel: pnp: PnP ACPI init Dec 13 09:10:43.078488 kernel: pnp: PnP ACPI: found 4 devices Dec 13 09:10:43.078496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 09:10:43.078508 kernel: NET: Registered PF_INET protocol family Dec 13 09:10:43.078516 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 09:10:43.078525 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 09:10:43.078533 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 09:10:43.078541 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 09:10:43.078549 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 09:10:43.078557 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 09:10:43.078565 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.078573 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 09:10:43.078588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 09:10:43.078601 kernel: NET: Registered PF_XDP protocol family Dec 13 09:10:43.078707 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 09:10:43.078795 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 
09:10:43.078879 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 09:10:43.078965 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 09:10:43.079349 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 09:10:43.079460 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 09:10:43.079635 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 09:10:43.079691 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 09:10:43.079843 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 37333 usecs Dec 13 09:10:43.079860 kernel: PCI: CLS 0 bytes, default 64 Dec 13 09:10:43.079875 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 09:10:43.079889 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Dec 13 09:10:43.079903 kernel: Initialise system trusted keyrings Dec 13 09:10:43.079917 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 09:10:43.079937 kernel: Key type asymmetric registered Dec 13 09:10:43.079950 kernel: Asymmetric key parser 'x509' registered Dec 13 09:10:43.079964 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 09:10:43.079994 kernel: io scheduler mq-deadline registered Dec 13 09:10:43.080002 kernel: io scheduler kyber registered Dec 13 09:10:43.080010 kernel: io scheduler bfq registered Dec 13 09:10:43.080019 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 09:10:43.080028 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 09:10:43.080036 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 09:10:43.080048 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 09:10:43.080056 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 09:10:43.080064 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 09:10:43.080073 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 09:10:43.080082 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 09:10:43.080095 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 09:10:43.080286 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 09:10:43.080309 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 09:10:43.080442 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 09:10:43.080569 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T09:10:42 UTC (1734081042) Dec 13 09:10:43.080668 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 09:10:43.080679 kernel: intel_pstate: CPU model not supported Dec 13 09:10:43.080687 kernel: NET: Registered PF_INET6 protocol family Dec 13 09:10:43.080695 kernel: Segment Routing with IPv6 Dec 13 09:10:43.080706 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 09:10:43.080719 kernel: NET: Registered PF_PACKET protocol family Dec 13 09:10:43.080732 kernel: Key type dns_resolver registered Dec 13 09:10:43.080749 kernel: IPI shorthand broadcast: enabled Dec 13 09:10:43.080757 kernel: sched_clock: Marking stable (1180003824, 161377386)->(1425667689, -84286479) Dec 13 09:10:43.080766 kernel: registered taskstats version 1 Dec 13 09:10:43.080775 kernel: Loading compiled-in X.509 certificates Dec 13 09:10:43.080786 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 09:10:43.080820 kernel: Key type .fscrypt registered 
Dec 13 09:10:43.080864 kernel: Key type fscrypt-provisioning registered Dec 13 09:10:43.080873 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 09:10:43.080887 kernel: ima: Allocated hash algorithm: sha1 Dec 13 09:10:43.080906 kernel: ima: No architecture policies found Dec 13 09:10:43.080959 kernel: clk: Disabling unused clocks Dec 13 09:10:43.080967 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 09:10:43.081095 kernel: Write protecting the kernel read-only data: 36864k Dec 13 09:10:43.081133 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 09:10:43.081150 kernel: Run /init as init process Dec 13 09:10:43.081166 kernel: with arguments: Dec 13 09:10:43.081181 kernel: /init Dec 13 09:10:43.081195 kernel: with environment: Dec 13 09:10:43.081211 kernel: HOME=/ Dec 13 09:10:43.081223 kernel: TERM=linux Dec 13 09:10:43.081235 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 09:10:43.081252 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:10:43.081270 systemd[1]: Detected virtualization kvm. Dec 13 09:10:43.081285 systemd[1]: Detected architecture x86-64. Dec 13 09:10:43.081299 systemd[1]: Running in initrd. Dec 13 09:10:43.081318 systemd[1]: No hostname configured, using default hostname. Dec 13 09:10:43.081333 systemd[1]: Hostname set to . Dec 13 09:10:43.081348 systemd[1]: Initializing machine ID from VM UUID. Dec 13 09:10:43.081363 systemd[1]: Queued start job for default target initrd.target. Dec 13 09:10:43.081379 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:10:43.081390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:10:43.081401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 09:10:43.081410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:10:43.081427 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 09:10:43.081442 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 09:10:43.081461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 09:10:43.081473 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 09:10:43.081487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:43.081501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:43.081517 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:10:43.081553 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:10:43.081570 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:10:43.081584 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:10:43.081597 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:10:43.081612 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 13 09:10:43.081629 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 09:10:43.081642 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 09:10:43.081656 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:43.081671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:43.081687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:43.081703 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:10:43.081719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 09:10:43.081735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:10:43.081754 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 09:10:43.081772 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 09:10:43.081788 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:10:43.081798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:10:43.081807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:43.081816 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 09:10:43.081825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:10:43.081833 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 09:10:43.081891 systemd-journald[182]: Collecting audit messages is disabled. Dec 13 09:10:43.081930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 09:10:43.081946 systemd-journald[182]: Journal started Dec 13 09:10:43.081972 systemd-journald[182]: Runtime Journal (/run/log/journal/572467512d0f462090db8a312ef818ee) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:10:43.085593 systemd-modules-load[183]: Inserted module 'overlay' Dec 13 09:10:43.133041 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 09:10:43.133085 kernel: Bridge firewalling registered Dec 13 09:10:43.130822 systemd-modules-load[183]: Inserted module 'br_netfilter' Dec 13 09:10:43.141199 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:10:43.141339 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:43.149272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:43.150640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 09:10:43.161353 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:10:43.169604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:10:43.176345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:10:43.191385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:10:43.194674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:43.209681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:10:43.213295 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 09:10:43.214435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:43.222326 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 09:10:43.231415 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:10:43.254025 dracut-cmdline[218]: dracut-dracut-053 Dec 13 09:10:43.257312 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:10:43.274717 systemd-resolved[219]: Positive Trust Anchors: Dec 13 09:10:43.275673 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:10:43.275727 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:10:43.282899 systemd-resolved[219]: Defaulting to hostname 'linux'. Dec 13 09:10:43.286090 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:10:43.287102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:10:43.407518 kernel: SCSI subsystem initialized Dec 13 09:10:43.423599 kernel: Loading iSCSI transport class v2.0-870. Dec 13 09:10:43.442043 kernel: iscsi: registered transport (tcp) Dec 13 09:10:43.475556 kernel: iscsi: registered transport (qla4xxx) Dec 13 09:10:43.475670 kernel: QLogic iSCSI HBA Driver Dec 13 09:10:43.550205 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 09:10:43.567437 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 09:10:43.607450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 09:10:43.607555 kernel: device-mapper: uevent: version 1.0.3 Dec 13 09:10:43.610240 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 09:10:43.669057 kernel: raid6: avx2x4 gen() 18348 MB/s Dec 13 09:10:43.687054 kernel: raid6: avx2x2 gen() 18026 MB/s Dec 13 09:10:43.705331 kernel: raid6: avx2x1 gen() 13545 MB/s Dec 13 09:10:43.705446 kernel: raid6: using algorithm avx2x4 gen() 18348 MB/s Dec 13 09:10:43.724354 kernel: raid6: .... xor() 7096 MB/s, rmw enabled Dec 13 09:10:43.724471 kernel: raid6: using avx2x2 recovery algorithm Dec 13 09:10:43.764038 kernel: xor: automatically using best checksumming function avx Dec 13 09:10:43.969037 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 09:10:43.986922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:10:43.995449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 09:10:44.025463 systemd-udevd[402]: Using default interface naming scheme 'v255'. Dec 13 09:10:44.039725 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:44.049455 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 09:10:44.091046 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Dec 13 09:10:44.147396 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:10:44.157397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:10:44.229463 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:44.248317 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 09:10:44.284606 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 09:10:44.287798 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:10:44.289528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:44.291944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:10:44.300273 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 09:10:44.329690 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:10:44.355585 kernel: scsi host0: Virtio SCSI HBA Dec 13 09:10:44.360309 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 09:10:44.360374 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 13 09:10:44.425024 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 09:10:44.425275 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 09:10:44.425297 kernel: GPT:9289727 != 125829119 Dec 13 09:10:44.425313 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 09:10:44.425329 kernel: GPT:9289727 != 125829119 Dec 13 09:10:44.425347 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 09:10:44.425376 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:44.425394 kernel: libata version 3.00 loaded. Dec 13 09:10:44.428010 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 13 09:10:44.451534 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Dec 13 09:10:44.451749 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 09:10:44.477037 kernel: scsi host1: ata_piix Dec 13 09:10:44.477214 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 09:10:44.477236 kernel: scsi host2: ata_piix Dec 13 09:10:44.479146 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Dec 13 09:10:44.479180 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Dec 13 09:10:44.479196 kernel: AES CTR mode by8 optimization enabled Dec 13 09:10:44.479212 kernel: ACPI: bus type USB registered Dec 13 09:10:44.479229 kernel: usbcore: registered new interface driver usbfs Dec 13 09:10:44.479246 kernel: usbcore: registered new interface driver hub Dec 13 09:10:44.479261 kernel: usbcore: registered new device driver usb Dec 13 09:10:44.451744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:10:44.451927 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:44.453233 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 09:10:44.454035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:44.454229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:44.455847 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:44.469699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:44.527221 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (454) Dec 13 09:10:44.535013 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Dec 13 09:10:44.576108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 09:10:44.613267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:44.626461 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 09:10:44.634939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:10:44.644900 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 09:10:44.646348 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 09:10:44.656416 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 09:10:44.662296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:10:44.683094 disk-uuid[541]: Primary Header is updated. Dec 13 09:10:44.683094 disk-uuid[541]: Secondary Entries is updated. Dec 13 09:10:44.683094 disk-uuid[541]: Secondary Header is updated. Dec 13 09:10:44.694011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:44.694102 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 13 09:10:44.709790 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 13 09:10:44.710069 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 13 09:10:44.710315 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 13 09:10:44.710501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:44.710521 kernel: hub 1-0:1.0: USB hub found Dec 13 09:10:44.710749 kernel: hub 1-0:1.0: 2 ports detected Dec 13 09:10:44.705122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:45.718063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:10:45.719542 disk-uuid[543]: The operation has completed successfully. Dec 13 09:10:45.770446 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 09:10:45.770584 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 09:10:45.787473 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 09:10:45.796720 sh[561]: Success Dec 13 09:10:45.818037 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 09:10:45.935774 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 09:10:45.944280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 09:10:45.946682 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 09:10:45.972064 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 09:10:45.972181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:45.972202 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 09:10:45.975590 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 09:10:45.975710 kernel: BTRFS info (device dm-0): using free space tree Dec 13 09:10:45.988830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 09:10:45.990263 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 09:10:45.996277 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 09:10:46.003193 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 09:10:46.018524 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:46.018634 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:46.018656 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:10:46.024058 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:10:46.039868 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 09:10:46.043605 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:46.052485 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 09:10:46.059413 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 09:10:46.244168 ignition[653]: Ignition 2.19.0 Dec 13 09:10:46.244192 ignition[653]: Stage: fetch-offline Dec 13 09:10:46.244271 ignition[653]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.244309 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.244495 ignition[653]: parsed url from cmdline: "" Dec 13 09:10:46.249192 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:10:46.244501 ignition[653]: no config URL provided Dec 13 09:10:46.250874 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:10:46.244511 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:10:46.244524 ignition[653]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:10:46.244533 ignition[653]: failed to fetch config: resource requires networking Dec 13 09:10:46.245181 ignition[653]: Ignition finished successfully Dec 13 09:10:46.260518 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:10:46.297933 systemd-networkd[751]: lo: Link UP Dec 13 09:10:46.297953 systemd-networkd[751]: lo: Gained carrier Dec 13 09:10:46.302419 systemd-networkd[751]: Enumeration completed Dec 13 09:10:46.303563 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 09:10:46.303570 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 13 09:10:46.304911 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 09:10:46.304917 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:10:46.305209 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:10:46.306879 systemd[1]: Reached target network.target - Network. Dec 13 09:10:46.309589 systemd-networkd[751]: eth0: Link UP Dec 13 09:10:46.309597 systemd-networkd[751]: eth0: Gained carrier Dec 13 09:10:46.309611 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 09:10:46.314467 systemd-networkd[751]: eth1: Link UP Dec 13 09:10:46.314472 systemd-networkd[751]: eth1: Gained carrier Dec 13 09:10:46.314488 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:10:46.316518 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 09:10:46.333078 systemd-networkd[751]: eth0: DHCPv4 address 146.190.151.20/20, gateway 146.190.144.1 acquired from 169.254.169.253 Dec 13 09:10:46.338191 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.10/20, gateway 10.124.0.1 acquired from 169.254.169.253 Dec 13 09:10:46.369496 ignition[753]: Ignition 2.19.0 Dec 13 09:10:46.369620 ignition[753]: Stage: fetch Dec 13 09:10:46.370019 ignition[753]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.370039 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.370333 ignition[753]: parsed url from cmdline: "" Dec 13 09:10:46.370377 ignition[753]: no config URL provided Dec 13 09:10:46.370390 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:10:46.370407 ignition[753]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:10:46.370436 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 13 09:10:46.415347 ignition[753]: GET result: OK Dec 13 09:10:46.415524 ignition[753]: parsing config with SHA512: 20434b4561e77c2db996689b8c66b322267fd62cc71789a6d6f05491ba7e22eb97e101283f74badcd53eccd45e81abd0e00a54c0c7501ab156bcee5971b841fc Dec 13 09:10:46.426386 unknown[753]: fetched base config from "system" Dec 13 09:10:46.431256 ignition[753]: fetch: fetch complete Dec 13 09:10:46.426398 unknown[753]: fetched base config from "system" Dec 13 09:10:46.431269 ignition[753]: fetch: fetch passed Dec 13 09:10:46.426423 unknown[753]: fetched user config from "digitalocean" Dec 13 09:10:46.431391 ignition[753]: Ignition finished successfully Dec 13 09:10:46.439293 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 09:10:46.447857 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 09:10:46.501352 ignition[760]: Ignition 2.19.0 Dec 13 09:10:46.502064 ignition[760]: Stage: kargs Dec 13 09:10:46.502556 ignition[760]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.502576 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.512293 ignition[760]: kargs: kargs passed Dec 13 09:10:46.512419 ignition[760]: Ignition finished successfully Dec 13 09:10:46.515938 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 09:10:46.528407 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 09:10:46.590902 ignition[766]: Ignition 2.19.0 Dec 13 09:10:46.594942 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 13 09:10:46.590915 ignition[766]: Stage: disks Dec 13 09:10:46.596562 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 09:10:46.591298 ignition[766]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.597414 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 09:10:46.591317 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.598223 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:10:46.592842 ignition[766]: disks: disks passed Dec 13 09:10:46.598924 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:10:46.592951 ignition[766]: Ignition finished successfully Dec 13 09:10:46.605260 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:10:46.633270 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 09:10:46.661710 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 09:10:46.667934 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 09:10:46.685797 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 09:10:46.868010 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 09:10:46.869243 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 09:10:46.870691 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 09:10:46.878469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:10:46.892219 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 09:10:46.895429 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Dec 13 09:10:46.908358 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783) Dec 13 09:10:46.912047 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:46.916635 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:46.916714 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:10:46.920632 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 09:10:46.927727 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 09:10:46.927789 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:10:46.964076 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:10:46.930582 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 09:10:46.936314 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 09:10:46.953281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 09:10:47.062473 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 09:10:47.085040 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Dec 13 09:10:47.089239 coreos-metadata[786]: Dec 13 09:10:47.089 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:47.098490 coreos-metadata[785]: Dec 13 09:10:47.098 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:47.101708 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 09:10:47.106137 coreos-metadata[786]: Dec 13 09:10:47.106 INFO Fetch successful Dec 13 09:10:47.114033 coreos-metadata[785]: Dec 13 09:10:47.113 INFO Fetch successful Dec 13 09:10:47.119532 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 09:10:47.121834 coreos-metadata[786]: Dec 13 09:10:47.120 INFO wrote hostname ci-4081.2.1-5-05f51c210a to /sysroot/etc/hostname Dec 13 09:10:47.123304 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:10:47.129948 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Dec 13 09:10:47.130222 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Dec 13 09:10:47.321210 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 09:10:47.329739 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 09:10:47.334526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 09:10:47.348473 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 09:10:47.352836 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:47.414761 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 09:10:47.420678 ignition[903]: INFO : Ignition 2.19.0 Dec 13 09:10:47.420678 ignition[903]: INFO : Stage: mount Dec 13 09:10:47.422287 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:47.422287 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:47.424425 ignition[903]: INFO : mount: mount passed Dec 13 09:10:47.425615 ignition[903]: INFO : Ignition finished successfully Dec 13 09:10:47.425817 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 09:10:47.432203 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 09:10:47.459392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:10:47.485200 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (915) Dec 13 09:10:47.485300 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:10:47.491248 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:10:47.491970 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:10:47.501792 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:10:47.503567 systemd-networkd[751]: eth0: Gained IPv6LL Dec 13 09:10:47.506666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
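Here the metadata agent fetches http://169.254.169.254/metadata/v1.json and writes the droplet's hostname into /sysroot/etc/hostname. A minimal Python sketch of those two steps follows; it is illustrative only (the real work is done by the Flatcar metadata agent), and the "hostname" key is an assumption about the DigitalOcean metadata JSON layout.

    # Sketch of the fetch-metadata-and-write-hostname step from the log above.
    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            metadata = json.load(resp)
        hostname = metadata["hostname"]  # assumed key; e.g. ci-4081.2.1-5-05f51c210a in this boot
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname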
Dec 13 09:10:47.549034 ignition[932]: INFO : Ignition 2.19.0 Dec 13 09:10:47.549034 ignition[932]: INFO : Stage: files Dec 13 09:10:47.552593 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:47.552593 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:47.552593 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Dec 13 09:10:47.555783 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 09:10:47.555783 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 09:10:47.561239 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 09:10:47.564352 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 09:10:47.564352 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 09:10:47.563111 unknown[932]: wrote ssh authorized keys file for user: core Dec 13 09:10:47.569337 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:10:47.569337 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 09:10:47.612050 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 09:10:47.758554 systemd-networkd[751]: eth1: Gained IPv6LL Dec 13 09:10:47.799909 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:10:47.799909 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 09:10:47.799909 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 09:10:48.292715 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 09:10:48.426031 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 09:10:48.426031 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 09:10:48.431620 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 09:10:48.869007 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 09:10:49.262082 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 09:10:49.262082 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 09:10:49.265581 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:10:49.265581 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:10:49.265581 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 09:10:49.265581 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 09:10:49.265581 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 09:10:49.274540 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:10:49.274540 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:10:49.274540 ignition[932]: INFO : files: files passed Dec 13 09:10:49.274540 ignition[932]: INFO : Ignition finished successfully Dec 13 09:10:49.268461 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 09:10:49.278571 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 09:10:49.295612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 09:10:49.309970 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 09:10:49.311191 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 09:10:49.326870 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:49.326870 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:49.331660 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:49.334877 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:10:49.336886 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 09:10:49.344643 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 09:10:49.419908 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 09:10:49.423357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 09:10:49.426321 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 09:10:49.427336 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 09:10:49.429087 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 09:10:49.445621 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 09:10:49.481924 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:10:49.497284 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 09:10:49.522469 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:10:49.524806 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:49.525819 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 09:10:49.533624 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 09:10:49.533868 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:10:49.536275 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 09:10:49.541621 systemd[1]: Stopped target basic.target - Basic System. Dec 13 09:10:49.542547 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 09:10:49.545332 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:10:49.546162 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 09:10:49.546955 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 09:10:49.547873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:10:49.551897 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 09:10:49.552701 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 09:10:49.553466 systemd[1]: Stopped target swap.target - Swaps. Dec 13 09:10:49.554113 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 09:10:49.554319 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:10:49.555322 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:49.556106 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:49.557030 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 09:10:49.560343 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 13 09:10:49.562113 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 09:10:49.562316 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 09:10:49.564372 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 09:10:49.564582 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:10:49.571490 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 09:10:49.571805 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 09:10:49.574701 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 09:10:49.574913 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:10:49.593767 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 09:10:49.596206 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 09:10:49.596462 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:10:49.603609 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 09:10:49.604840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 09:10:49.605086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:49.607417 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 09:10:49.607614 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:10:49.640337 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 09:10:49.640520 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 09:10:49.655117 ignition[984]: INFO : Ignition 2.19.0 Dec 13 09:10:49.655117 ignition[984]: INFO : Stage: umount Dec 13 09:10:49.655117 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:49.655117 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:49.692411 ignition[984]: INFO : umount: umount passed Dec 13 09:10:49.692411 ignition[984]: INFO : Ignition finished successfully Dec 13 09:10:49.669108 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 09:10:49.684772 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 09:10:49.685046 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 09:10:49.687788 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 09:10:49.688018 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 09:10:49.693872 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 09:10:49.694013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 09:10:49.728027 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 09:10:49.728137 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 09:10:49.728904 systemd[1]: Stopped target network.target - Network. Dec 13 09:10:49.754047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 09:10:49.754190 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:10:49.777937 systemd[1]: Stopped target paths.target - Path Units. Dec 13 09:10:49.785061 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 09:10:49.785154 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 09:10:49.797875 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 09:10:49.798582 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 09:10:49.809495 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 09:10:49.809582 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:10:49.810364 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 09:10:49.810431 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 09:10:49.811226 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 09:10:49.811311 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 09:10:49.812841 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 09:10:49.812919 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 09:10:49.817413 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 09:10:49.818897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 09:10:49.822730 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 09:10:49.823283 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 09:10:49.824698 systemd-networkd[751]: eth0: DHCPv6 lease lost Dec 13 09:10:49.828093 systemd-networkd[751]: eth1: DHCPv6 lease lost Dec 13 09:10:49.828563 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 09:10:49.828696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 09:10:49.831609 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 09:10:49.831812 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 09:10:49.835466 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 09:10:49.835744 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 09:10:49.838793 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 09:10:49.838873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:49.847359 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 09:10:49.849302 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 09:10:49.849452 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:10:49.851948 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 09:10:49.852149 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:49.854049 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 09:10:49.854144 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:49.855932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 09:10:49.856086 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:49.860354 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:10:49.884120 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 09:10:49.884386 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:49.887684 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 09:10:49.887805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:49.905197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 09:10:49.905288 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:49.907162 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 09:10:49.907269 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:10:49.910368 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 09:10:49.910512 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 09:10:49.914221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:10:49.914315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:49.924603 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 09:10:49.928116 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 09:10:49.928260 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:10:49.930513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:49.930615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:49.935003 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 09:10:49.935137 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 09:10:49.939676 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 09:10:49.939837 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 09:10:49.943876 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 09:10:49.952593 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 09:10:49.978850 systemd[1]: Switching root. Dec 13 09:10:50.027910 systemd-journald[182]: Journal stopped Dec 13 09:10:52.004356 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Dec 13 09:10:52.004508 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 09:10:52.004529 kernel: SELinux: policy capability open_perms=1 Dec 13 09:10:52.004545 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 09:10:52.004560 kernel: SELinux: policy capability always_check_network=0 Dec 13 09:10:52.004575 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 09:10:52.004590 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 09:10:52.004611 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 09:10:52.004627 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 09:10:52.004647 kernel: audit: type=1403 audit(1734081050.271:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 09:10:52.004666 systemd[1]: Successfully loaded SELinux policy in 55.200ms. Dec 13 09:10:52.004699 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.322ms. Dec 13 09:10:52.004719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:10:52.004743 systemd[1]: Detected virtualization kvm. Dec 13 09:10:52.004767 systemd[1]: Detected architecture x86-64. Dec 13 09:10:52.004786 systemd[1]: Detected first boot. Dec 13 09:10:52.004804 systemd[1]: Hostname set to . Dec 13 09:10:52.004827 systemd[1]: Initializing machine ID from VM UUID. 
Dec 13 09:10:52.004844 zram_generator::config[1027]: No configuration found. Dec 13 09:10:52.004866 systemd[1]: Populated /etc with preset unit settings. Dec 13 09:10:52.004884 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 09:10:52.004902 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 09:10:52.004921 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 09:10:52.004941 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 09:10:52.004957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 09:10:52.011084 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 09:10:52.011153 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 09:10:52.011176 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 09:10:52.011195 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 09:10:52.011214 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 09:10:52.011233 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 09:10:52.011252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:10:52.011271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:10:52.011291 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 09:10:52.011320 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 09:10:52.011341 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 09:10:52.011363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:10:52.011381 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 09:10:52.011399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:52.011418 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 09:10:52.011437 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 09:10:52.011461 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 09:10:52.011480 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 09:10:52.011498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:52.011517 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:10:52.011536 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:10:52.011556 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:10:52.011576 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 09:10:52.011594 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 09:10:52.011616 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:52.011634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:52.011652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:52.011670 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Dec 13 09:10:52.011690 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 09:10:52.011707 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 09:10:52.011725 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 09:10:52.011743 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:52.011764 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 09:10:52.011788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 09:10:52.011805 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 09:10:52.011822 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 09:10:52.011859 systemd[1]: Reached target machines.target - Containers. Dec 13 09:10:52.011877 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 09:10:52.011897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:52.011916 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:10:52.011934 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 09:10:52.011954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:52.012011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:10:52.012031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:52.012050 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 09:10:52.012067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:52.012085 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 09:10:52.012104 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 09:10:52.012123 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 09:10:52.012143 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 09:10:52.012170 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 09:10:52.012190 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:10:52.012210 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:10:52.012230 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 09:10:52.012250 kernel: ACPI: bus type drm_connector registered Dec 13 09:10:52.012269 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 09:10:52.012288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:10:52.012306 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 09:10:52.012324 systemd[1]: Stopped verity-setup.service. Dec 13 09:10:52.012346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:52.012364 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Dec 13 09:10:52.012383 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 09:10:52.012401 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 09:10:52.012419 kernel: fuse: init (API version 7.39) Dec 13 09:10:52.012442 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 09:10:52.012461 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 09:10:52.012480 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 09:10:52.012499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:10:52.012516 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 09:10:52.012534 kernel: loop: module loaded Dec 13 09:10:52.012554 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 09:10:52.012573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:52.012598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:52.012621 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:10:52.012642 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:10:52.012663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:52.012684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:52.012774 systemd-journald[1100]: Collecting audit messages is disabled. Dec 13 09:10:52.012828 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 09:10:52.012851 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 09:10:52.012874 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:52.012896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:52.012916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:52.012938 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 09:10:52.012960 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 09:10:52.015090 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 09:10:52.015169 systemd-journald[1100]: Journal started Dec 13 09:10:52.015241 systemd-journald[1100]: Runtime Journal (/run/log/journal/572467512d0f462090db8a312ef818ee) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:10:51.395610 systemd[1]: Queued start job for default target multi-user.target. Dec 13 09:10:51.445180 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 09:10:51.445878 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 09:10:52.028019 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 09:10:52.042073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 09:10:52.049027 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 09:10:52.049163 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:10:52.056262 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 09:10:52.072053 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 09:10:52.085027 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 13 09:10:52.088192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:52.099021 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 09:10:52.099162 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:52.132021 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 09:10:52.137045 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:52.153017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:10:52.164035 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 09:10:52.171128 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:10:52.179645 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 09:10:52.183464 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:52.185272 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 09:10:52.193171 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 09:10:52.196752 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 09:10:52.290534 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:52.297335 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 09:10:52.311371 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 09:10:52.321474 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 09:10:52.339415 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 09:10:52.344198 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 09:10:52.356749 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 09:10:52.378074 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 09:10:52.420089 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 09:10:52.454738 systemd-journald[1100]: Time spent on flushing to /var/log/journal/572467512d0f462090db8a312ef818ee is 65.760ms for 998 entries. Dec 13 09:10:52.454738 systemd-journald[1100]: System Journal (/var/log/journal/572467512d0f462090db8a312ef818ee) is 8.0M, max 195.6M, 187.6M free. Dec 13 09:10:52.543967 systemd-journald[1100]: Received client request to flush runtime journal. Dec 13 09:10:52.544096 kernel: loop1: detected capacity change from 0 to 8 Dec 13 09:10:52.475523 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 09:10:52.479452 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 09:10:52.524602 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 09:10:52.556014 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 09:10:52.559352 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 09:10:52.627530 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 13 09:10:52.637297 kernel: loop3: detected capacity change from 0 to 205544 Dec 13 09:10:52.641259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:10:52.692046 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 09:10:52.734027 kernel: loop5: detected capacity change from 0 to 8 Dec 13 09:10:52.738170 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 09:10:52.759957 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Dec 13 09:10:52.760027 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Dec 13 09:10:52.770740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:10:52.776012 kernel: loop7: detected capacity change from 0 to 205544 Dec 13 09:10:52.813557 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Dec 13 09:10:52.814452 (sd-merge)[1170]: Merged extensions into '/usr'. Dec 13 09:10:52.833708 systemd[1]: Reloading requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 09:10:52.833731 systemd[1]: Reloading... Dec 13 09:10:53.117435 zram_generator::config[1201]: No configuration found. Dec 13 09:10:53.332286 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 09:10:53.561585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:10:53.693387 systemd[1]: Reloading finished in 858 ms. Dec 13 09:10:53.755143 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 09:10:53.757711 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 09:10:53.800488 systemd[1]: Starting ensure-sysext.service... Dec 13 09:10:53.809215 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:10:53.833116 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Dec 13 09:10:53.833137 systemd[1]: Reloading... Dec 13 09:10:53.924737 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 09:10:53.931570 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 09:10:53.935095 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 09:10:53.935673 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 09:10:53.935782 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 09:10:53.949339 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:10:53.949359 systemd-tmpfiles[1242]: Skipping /boot Dec 13 09:10:53.995711 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:10:53.995729 systemd-tmpfiles[1242]: Skipping /boot Dec 13 09:10:54.049020 zram_generator::config[1272]: No configuration found. Dec 13 09:10:54.360936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:10:54.449338 systemd[1]: Reloading finished in 615 ms. 
Dec 13 09:10:54.480365 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 09:10:54.482251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:54.540580 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:10:54.551211 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 09:10:54.559453 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 09:10:54.571511 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:10:54.577544 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:10:54.588654 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 09:10:54.597761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.598026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:54.608516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:54.618606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:54.628569 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:54.629520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:54.629730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.633662 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.633906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:54.634220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:54.634322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.639530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.639915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:54.651521 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:10:54.652567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:54.652848 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.665143 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 09:10:54.666213 systemd[1]: Finished ensure-sysext.service. Dec 13 09:10:54.684678 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 09:10:54.686088 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 09:10:54.686466 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:10:54.700747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:54.701051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:54.702702 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:54.726645 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 09:10:54.741518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:54.741834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:54.749712 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:54.752333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:54.754810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:54.763016 augenrules[1346]: No rules Dec 13 09:10:54.767149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:10:54.774690 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Dec 13 09:10:54.778185 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 09:10:54.790313 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 09:10:54.834669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:54.847316 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:10:54.849760 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 09:10:54.856140 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 09:10:54.874598 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:10:54.893307 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 09:10:54.956266 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 13 09:10:54.958167 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.958325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:54.969265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:54.981345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:54.992189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:54.992951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:54.993022 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:10:54.993043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 09:10:55.023711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:55.023963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:55.025603 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:55.030085 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:55.033276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:55.065196 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 09:10:55.101341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:55.101635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:55.103070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:55.153014 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1372) Dec 13 09:10:55.177667 systemd-networkd[1358]: lo: Link UP Dec 13 09:10:55.177680 systemd-networkd[1358]: lo: Gained carrier Dec 13 09:10:55.194484 kernel: ISO 9660 Extensions: RRIP_1991A Dec 13 09:10:55.195539 systemd-networkd[1358]: Enumeration completed Dec 13 09:10:55.196135 systemd-networkd[1358]: eth1: Configuring with /run/systemd/network/10-3a:7c:77:bd:2d:9e.network. Dec 13 09:10:55.197812 systemd-networkd[1358]: eth1: Link UP Dec 13 09:10:55.197826 systemd-networkd[1358]: eth1: Gained carrier Dec 13 09:10:55.203515 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:10:55.204776 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 13 09:10:55.219434 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 09:10:55.233535 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1372) Dec 13 09:10:55.234965 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 09:10:55.236063 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 09:10:55.289965 systemd-resolved[1324]: Positive Trust Anchors: Dec 13 09:10:55.292142 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:10:55.292221 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:10:55.300318 systemd-resolved[1324]: Using system hostname 'ci-4081.2.1-5-05f51c210a'. Dec 13 09:10:55.304072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1368) Dec 13 09:10:55.304795 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:10:55.306804 systemd[1]: Reached target network.target - Network. Dec 13 09:10:55.308149 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 13 09:10:55.366320 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:10:55.376301 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 09:10:55.378022 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 09:10:55.382019 kernel: ACPI: button: Power Button [PWRF] Dec 13 09:10:55.411054 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 09:10:55.474926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 09:10:55.485071 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 09:10:55.557593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:55.583014 systemd-networkd[1358]: eth0: Configuring with /run/systemd/network/10-9e:90:0f:45:f5:f8.network. Dec 13 09:10:55.585536 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Dec 13 09:10:55.588188 systemd-networkd[1358]: eth0: Link UP Dec 13 09:10:55.588204 systemd-networkd[1358]: eth0: Gained carrier Dec 13 09:10:55.592780 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Dec 13 09:10:55.594677 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Dec 13 09:10:55.602813 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 09:10:55.664057 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 09:10:55.671093 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 09:10:55.709058 kernel: Console: switching to colour dummy device 80x25 Dec 13 09:10:55.710231 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 09:10:55.710302 kernel: [drm] features: -context_init Dec 13 09:10:55.716070 kernel: [drm] number of scanouts: 1 Dec 13 09:10:55.716192 kernel: [drm] number of cap sets: 0 Dec 13 09:10:55.714120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:55.715397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:55.717059 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 09:10:55.730298 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 09:10:55.733027 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 09:10:55.772591 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 09:10:55.736409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:55.782734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:55.783606 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:55.805627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:55.863013 kernel: EDAC MC: Ver: 3.0.0 Dec 13 09:10:55.879268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:55.897743 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 09:10:55.904498 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 09:10:55.931548 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
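At this point both interfaces are configured from MAC-addressed .network files under /run/systemd/network, and earlier in the boot eth0 obtained 146.190.151.20/20 with gateway 146.190.144.1 via DHCPv4. A short Python check, using only values taken from the log, confirms that the leased address and the gateway sit in the same /20:

    # Cross-check of the DHCPv4 values logged earlier for eth0.
    import ipaddress

    lease = ipaddress.ip_interface("146.190.151.20/20")
    gateway = ipaddress.ip_address("146.190.144.1")

    print(lease.network)             # 146.190.144.0/20
    print(gateway in lease.network)  # True: gateway lies inside the leased /20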
Dec 13 09:10:55.979115 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 09:10:55.979647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:55.979743 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:10:55.979966 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 09:10:55.980116 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 09:10:55.980489 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 09:10:55.980731 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 09:10:55.980827 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 09:10:55.980902 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 09:10:55.980931 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:10:55.981278 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:10:55.988496 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 09:10:55.994634 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 09:10:56.016873 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 09:10:56.023386 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 09:10:56.026962 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 09:10:56.028365 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:10:56.030910 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:10:56.031808 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:10:56.031851 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:10:56.066149 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 09:10:56.082055 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:10:56.097340 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 09:10:56.116536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 09:10:56.132904 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 09:10:56.150246 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 09:10:56.151664 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 09:10:56.159377 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 09:10:56.181199 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 09:10:56.193420 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 09:10:56.205396 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 09:10:56.217038 coreos-metadata[1435]: Dec 13 09:10:56.216 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:56.222269 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 13 09:10:56.223615 jq[1438]: false Dec 13 09:10:56.225329 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 09:10:56.226640 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 09:10:56.234143 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 09:10:56.243242 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 09:10:56.244898 coreos-metadata[1435]: Dec 13 09:10:56.244 INFO Fetch successful Dec 13 09:10:56.253349 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 09:10:56.271763 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 09:10:56.273210 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 09:10:56.273880 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 09:10:56.275439 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 09:10:56.325922 dbus-daemon[1436]: [system] SELinux support is enabled Dec 13 09:10:56.330839 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 09:10:56.331292 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 09:10:56.345750 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 09:10:56.351004 extend-filesystems[1440]: Found loop4 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found loop5 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found loop6 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found loop7 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda1 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda2 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda3 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found usr Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda4 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda6 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda7 Dec 13 09:10:56.351004 extend-filesystems[1440]: Found vda9 Dec 13 09:10:56.351004 extend-filesystems[1440]: Checking size of /dev/vda9 Dec 13 09:10:56.493189 extend-filesystems[1440]: Resized partition /dev/vda9 Dec 13 09:10:56.370724 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 09:10:56.494199 update_engine[1448]: I20241213 09:10:56.451359 1448 main.cc:92] Flatcar Update Engine starting Dec 13 09:10:56.494199 update_engine[1448]: I20241213 09:10:56.466800 1448 update_check_scheduler.cc:74] Next update check in 9m31s Dec 13 09:10:56.494582 tar[1459]: linux-amd64/helm Dec 13 09:10:56.503454 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 13 09:10:56.503550 jq[1450]: true Dec 13 09:10:56.503643 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Dec 13 09:10:56.370782 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 13 09:10:56.382385 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 09:10:56.382497 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 13 09:10:56.525623 jq[1474]: true Dec 13 09:10:56.382529 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 09:10:56.417365 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 09:10:56.450443 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 09:10:56.462233 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 09:10:56.467219 systemd[1]: Started update-engine.service - Update Engine. Dec 13 09:10:56.509724 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 09:10:56.706765 systemd-logind[1446]: New seat seat0. Dec 13 09:10:56.721709 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 09:10:56.725661 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 09:10:56.754477 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1361) Dec 13 09:10:56.728905 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 09:10:56.729401 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 09:10:56.757892 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 09:10:56.757892 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 09:10:56.757892 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 09:10:56.768094 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Dec 13 09:10:56.768094 extend-filesystems[1440]: Found vdb Dec 13 09:10:56.783857 systemd-networkd[1358]: eth1: Gained IPv6LL Dec 13 09:10:56.785691 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Dec 13 09:10:56.785822 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 09:10:56.786299 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 09:10:56.828352 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 09:10:56.840109 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 09:10:56.858030 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:10:56.861405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:10:56.878157 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 09:10:56.884815 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 09:10:56.906566 systemd[1]: Starting sshkeys.service... Dec 13 09:10:56.914860 systemd-networkd[1358]: eth0: Gained IPv6LL Dec 13 09:10:56.916138 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Dec 13 09:10:57.024281 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
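extend-filesystems.service grows the mounted root filesystem online, as the resize2fs output above shows (/dev/vda9 going from 553472 to 15121403 4k blocks while mounted on /). A minimal sketch of the same kind of online ext4 grow done by hand; the device name is taken from the log, and on Flatcar the service performs this step automatically:

```bash
# Illustrative only -- extend-filesystems.service already did this during boot.
# Show the current size of the mounted root filesystem.
df -h /
# Grow the ext4 filesystem to fill the partition; resize2fs supports online
# resizing of a mounted ext4 device, exactly as logged above.
sudo resize2fs /dev/vda9
# Confirm the new block count from the superblock.
sudo dumpe2fs -h /dev/vda9 | grep -i 'block count'
```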
Dec 13 09:10:57.043680 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 09:10:57.092145 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 09:10:57.212149 coreos-metadata[1516]: Dec 13 09:10:57.212 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:57.236694 coreos-metadata[1516]: Dec 13 09:10:57.236 INFO Fetch successful Dec 13 09:10:57.240036 containerd[1469]: time="2024-12-13T09:10:57.239846520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 09:10:57.267953 unknown[1516]: wrote ssh authorized keys file for user: core Dec 13 09:10:57.319586 update-ssh-keys[1527]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:10:57.320092 containerd[1469]: time="2024-12-13T09:10:57.318966188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.320856 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 09:10:57.323134 systemd[1]: Finished sshkeys.service. Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332214194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332265461Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332287703Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332468957Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332486326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332551848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332566582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332789440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332808256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332822918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333614 containerd[1469]: time="2024-12-13T09:10:57.332833930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.333968 containerd[1469]: time="2024-12-13T09:10:57.332920236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.337468 containerd[1469]: time="2024-12-13T09:10:57.336380176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.337468 containerd[1469]: time="2024-12-13T09:10:57.336662926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.337468 containerd[1469]: time="2024-12-13T09:10:57.336685668Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 09:10:57.337468 containerd[1469]: time="2024-12-13T09:10:57.336805431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 09:10:57.337468 containerd[1469]: time="2024-12-13T09:10:57.336857321Z" level=info msg="metadata content store policy set" policy=shared Dec 13 09:10:57.359820 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 09:10:57.368447 containerd[1469]: time="2024-12-13T09:10:57.368308145Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.368672614Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.368700993Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.368734221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.368753006Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.368948959Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369257798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369391386Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369407840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369422350Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369437511Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369451037Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369463690Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369477473Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370009 containerd[1469]: time="2024-12-13T09:10:57.369493521Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369507200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369521040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369534043Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369565628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369581533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369596316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369610298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369637900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369654508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369672257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369687091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369700816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369715400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370390 containerd[1469]: time="2024-12-13T09:10:57.369727674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369739078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369751958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369768946Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369798440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369815496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369840369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369896636Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369915300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369927361Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369945570Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 09:10:57.370688 containerd[1469]: time="2024-12-13T09:10:57.369965600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.376026 containerd[1469]: time="2024-12-13T09:10:57.372209098Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 09:10:57.376026 containerd[1469]: time="2024-12-13T09:10:57.372269436Z" level=info msg="NRI interface is disabled by configuration." Dec 13 09:10:57.376026 containerd[1469]: time="2024-12-13T09:10:57.372296090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 09:10:57.376217 containerd[1469]: time="2024-12-13T09:10:57.372740526Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 09:10:57.376217 containerd[1469]: time="2024-12-13T09:10:57.372859020Z" level=info msg="Connect containerd service" Dec 13 09:10:57.376217 containerd[1469]: time="2024-12-13T09:10:57.372921857Z" level=info msg="using legacy CRI server" Dec 13 09:10:57.376217 containerd[1469]: time="2024-12-13T09:10:57.372930746Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 09:10:57.376217 containerd[1469]: time="2024-12-13T09:10:57.373106077Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 09:10:57.377219 containerd[1469]: time="2024-12-13T09:10:57.377156438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:10:57.378705 
containerd[1469]: time="2024-12-13T09:10:57.378616276Z" level=info msg="Start subscribing containerd event" Dec 13 09:10:57.380199 containerd[1469]: time="2024-12-13T09:10:57.380165155Z" level=info msg="Start recovering state" Dec 13 09:10:57.383935 containerd[1469]: time="2024-12-13T09:10:57.379156448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 09:10:57.387158 containerd[1469]: time="2024-12-13T09:10:57.382694306Z" level=info msg="Start event monitor" Dec 13 09:10:57.387158 containerd[1469]: time="2024-12-13T09:10:57.385849975Z" level=info msg="Start snapshots syncer" Dec 13 09:10:57.387158 containerd[1469]: time="2024-12-13T09:10:57.385872301Z" level=info msg="Start cni network conf syncer for default" Dec 13 09:10:57.387158 containerd[1469]: time="2024-12-13T09:10:57.385884482Z" level=info msg="Start streaming server" Dec 13 09:10:57.388167 containerd[1469]: time="2024-12-13T09:10:57.388121404Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 09:10:57.389536 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 09:10:57.395468 containerd[1469]: time="2024-12-13T09:10:57.394704240Z" level=info msg="containerd successfully booted in 0.159569s" Dec 13 09:10:57.473778 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 09:10:57.571051 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 09:10:57.636327 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 09:10:57.652329 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 09:10:57.664642 systemd[1]: Started sshd@0-146.190.151.20:22-147.75.109.163:35394.service - OpenSSH per-connection server daemon (147.75.109.163:35394). Dec 13 09:10:57.711185 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 09:10:57.711598 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 09:10:57.730743 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 09:10:57.792817 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 09:10:57.802638 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 09:10:57.807273 sshd[1545]: Accepted publickey for core from 147.75.109.163 port 35394 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:57.817566 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 09:10:57.820821 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 09:10:57.829223 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:57.855920 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 09:10:57.874596 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 09:10:57.889588 systemd-logind[1446]: New session 1 of user core. Dec 13 09:10:57.919171 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 09:10:57.931002 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 09:10:57.949223 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 09:10:58.125137 tar[1459]: linux-amd64/LICENSE Dec 13 09:10:58.125137 tar[1459]: linux-amd64/README.md Dec 13 09:10:58.178143 systemd[1557]: Queued start job for default target default.target. Dec 13 09:10:58.184787 systemd[1557]: Created slice app.slice - User Application Slice. 
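containerd reports serving on /run/containerd/containerd.sock and booting in roughly 0.16s. A hedged way to spot-check the daemon over that socket, assuming the stock ctr client shipped with containerd is available on the node:

```bash
# Query the daemon over the socket it reports serving on above.
sudo ctr --address /run/containerd/containerd.sock version
# List the plugins containerd loaded; aufs/zfs/devmapper show as skipped,
# matching the "skip loading plugin" messages in the log.
sudo ctr --address /run/containerd/containerd.sock plugins ls
```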
Dec 13 09:10:58.184831 systemd[1557]: Reached target paths.target - Paths. Dec 13 09:10:58.184850 systemd[1557]: Reached target timers.target - Timers. Dec 13 09:10:58.185398 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 09:10:58.189223 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 09:10:58.209144 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 09:10:58.210297 systemd[1557]: Reached target sockets.target - Sockets. Dec 13 09:10:58.210329 systemd[1557]: Reached target basic.target - Basic System. Dec 13 09:10:58.210417 systemd[1557]: Reached target default.target - Main User Target. Dec 13 09:10:58.210463 systemd[1557]: Startup finished in 247ms. Dec 13 09:10:58.211796 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 09:10:58.229946 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 09:10:58.336723 systemd[1]: Started sshd@1-146.190.151.20:22-147.75.109.163:35400.service - OpenSSH per-connection server daemon (147.75.109.163:35400). Dec 13 09:10:58.421286 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 35400 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:58.422342 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:58.438092 systemd-logind[1446]: New session 2 of user core. Dec 13 09:10:58.446298 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 09:10:58.529708 sshd[1571]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:58.540638 systemd[1]: sshd@1-146.190.151.20:22-147.75.109.163:35400.service: Deactivated successfully. Dec 13 09:10:58.545765 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 09:10:58.549268 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Dec 13 09:10:58.562483 systemd[1]: Started sshd@2-146.190.151.20:22-147.75.109.163:35408.service - OpenSSH per-connection server daemon (147.75.109.163:35408). Dec 13 09:10:58.567465 systemd-logind[1446]: Removed session 2. Dec 13 09:10:58.623627 sshd[1578]: Accepted publickey for core from 147.75.109.163 port 35408 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:58.626378 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:58.635319 systemd-logind[1446]: New session 3 of user core. Dec 13 09:10:58.640316 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 09:10:58.731152 sshd[1578]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:58.740455 systemd[1]: sshd@2-146.190.151.20:22-147.75.109.163:35408.service: Deactivated successfully. Dec 13 09:10:58.744776 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 09:10:58.748440 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Dec 13 09:10:58.759418 systemd-logind[1446]: Removed session 3. Dec 13 09:10:58.876882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:10:58.881237 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:10:58.882580 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 09:10:58.886813 systemd[1]: Startup finished in 1.350s (kernel) + 7.514s (initrd) + 8.669s (userspace) = 17.534s. 
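systemd reports "Startup finished in 1.350s (kernel) + 7.514s (initrd) + 8.669s (userspace) = 17.534s". The same figures, plus a per-unit breakdown, can be pulled from systemd-analyze after boot; a small sketch:

```bash
# Reprint the kernel/initrd/userspace timings summarized in the log line above.
systemd-analyze time
# Show which units contributed most to userspace startup.
systemd-analyze blame | head -n 15
# Show the critical chain of units leading to the default target.
systemd-analyze critical-chain
```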
Dec 13 09:10:59.921965 kubelet[1589]: E1213 09:10:59.921848 1589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:10:59.925101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:10:59.925353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:10:59.925792 systemd[1]: kubelet.service: Consumed 1.486s CPU time. Dec 13 09:11:08.755681 systemd[1]: Started sshd@3-146.190.151.20:22-147.75.109.163:39894.service - OpenSSH per-connection server daemon (147.75.109.163:39894). Dec 13 09:11:08.802514 sshd[1602]: Accepted publickey for core from 147.75.109.163 port 39894 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:08.804855 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:08.812526 systemd-logind[1446]: New session 4 of user core. Dec 13 09:11:08.818351 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 09:11:08.891222 sshd[1602]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:08.903784 systemd[1]: sshd@3-146.190.151.20:22-147.75.109.163:39894.service: Deactivated successfully. Dec 13 09:11:08.906192 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 09:11:08.907504 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Dec 13 09:11:08.915475 systemd[1]: Started sshd@4-146.190.151.20:22-147.75.109.163:39900.service - OpenSSH per-connection server daemon (147.75.109.163:39900). Dec 13 09:11:08.918723 systemd-logind[1446]: Removed session 4. Dec 13 09:11:08.979819 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 39900 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:08.982044 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:08.989943 systemd-logind[1446]: New session 5 of user core. Dec 13 09:11:08.996323 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 09:11:09.057282 sshd[1609]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:09.072307 systemd[1]: sshd@4-146.190.151.20:22-147.75.109.163:39900.service: Deactivated successfully. Dec 13 09:11:09.075466 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 09:11:09.077237 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Dec 13 09:11:09.083563 systemd[1]: Started sshd@5-146.190.151.20:22-147.75.109.163:39908.service - OpenSSH per-connection server daemon (147.75.109.163:39908). Dec 13 09:11:09.085583 systemd-logind[1446]: Removed session 5. Dec 13 09:11:09.134650 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 39908 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.136899 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.143073 systemd-logind[1446]: New session 6 of user core. Dec 13 09:11:09.150356 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 09:11:09.224101 sshd[1616]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:09.228900 systemd[1]: sshd@5-146.190.151.20:22-147.75.109.163:39908.service: Deactivated successfully. 
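The kubelet exit at the top of this stretch is caused by the missing /var/lib/kubelet/config.yaml, and systemd later restarts the unit into the same error. A hedged way to inspect that failure loop from the journal, with the unit name and path taken from the log:

```bash
# Show the failed state and the 1/FAILURE exit recorded above.
systemctl status kubelet.service --no-pager
# Read the last few kubelet log lines, including the config.yaml error.
journalctl -u kubelet.service -n 20 --no-pager
# The error names the expected config path; on a kubeadm-bootstrapped node this
# file is normally written during 'kubeadm init' / 'kubeadm join'.
ls -l /var/lib/kubelet/config.yaml
```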
Dec 13 09:11:09.231765 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 09:11:09.248297 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Dec 13 09:11:09.259532 systemd[1]: Started sshd@6-146.190.151.20:22-147.75.109.163:39922.service - OpenSSH per-connection server daemon (147.75.109.163:39922). Dec 13 09:11:09.261846 systemd-logind[1446]: Removed session 6. Dec 13 09:11:09.319655 sshd[1623]: Accepted publickey for core from 147.75.109.163 port 39922 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.323449 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.331408 systemd-logind[1446]: New session 7 of user core. Dec 13 09:11:09.339470 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 09:11:09.427825 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 09:11:09.428327 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:09.442951 sudo[1626]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:09.448074 sshd[1623]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:09.460470 systemd[1]: sshd@6-146.190.151.20:22-147.75.109.163:39922.service: Deactivated successfully. Dec 13 09:11:09.463350 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 09:11:09.464635 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Dec 13 09:11:09.479748 systemd[1]: Started sshd@7-146.190.151.20:22-147.75.109.163:39938.service - OpenSSH per-connection server daemon (147.75.109.163:39938). Dec 13 09:11:09.484039 systemd-logind[1446]: Removed session 7. Dec 13 09:11:09.535884 sshd[1631]: Accepted publickey for core from 147.75.109.163 port 39938 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.537373 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.548951 systemd-logind[1446]: New session 8 of user core. Dec 13 09:11:09.558036 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 09:11:09.633462 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 09:11:09.637512 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:09.647241 sudo[1635]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:09.655890 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 09:11:09.656562 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:09.685750 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 09:11:09.691691 auditctl[1638]: No rules Dec 13 09:11:09.692752 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 09:11:09.693373 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 09:11:09.699337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:11:09.760048 augenrules[1656]: No rules Dec 13 09:11:09.761325 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
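The sudo session above removes the two audit rule files and restarts audit-rules.service, after which both auditctl and augenrules report "No rules". The equivalent manual steps, sketched from the commands actually logged:

```bash
# Same commands the 'core' user ran via sudo in the log (paths taken from the log).
sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
sudo systemctl restart audit-rules
# Verify that no audit rules remain loaded (expected output: "No rules").
sudo auditctl -l
```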
Dec 13 09:11:09.764538 sudo[1634]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:09.770086 sshd[1631]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:09.794956 systemd[1]: sshd@7-146.190.151.20:22-147.75.109.163:39938.service: Deactivated successfully. Dec 13 09:11:09.799210 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 09:11:09.803199 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Dec 13 09:11:09.813115 systemd[1]: Started sshd@8-146.190.151.20:22-147.75.109.163:39948.service - OpenSSH per-connection server daemon (147.75.109.163:39948). Dec 13 09:11:09.815543 systemd-logind[1446]: Removed session 8. Dec 13 09:11:09.888414 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 39948 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.893499 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.909090 systemd-logind[1446]: New session 9 of user core. Dec 13 09:11:09.920338 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 09:11:09.926303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 09:11:09.942162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:10.011506 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 09:11:10.012665 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:10.231231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:10.253163 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:11:10.370795 kubelet[1683]: E1213 09:11:10.370640 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:11:10.374570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:11:10.374767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:11:10.883122 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 09:11:10.888272 (dockerd)[1697]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 09:11:11.656431 dockerd[1697]: time="2024-12-13T09:11:11.656294350Z" level=info msg="Starting up" Dec 13 09:11:11.917103 dockerd[1697]: time="2024-12-13T09:11:11.912455992Z" level=info msg="Loading containers: start." Dec 13 09:11:12.154749 kernel: Initializing XFRM netlink socket Dec 13 09:11:12.222917 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection. Dec 13 09:11:13.011805 systemd-resolved[1324]: Clock change detected. Flushing caches. Dec 13 09:11:13.013057 systemd-timesyncd[1339]: Contacted time server 23.168.136.132:123 (2.flatcar.pool.ntp.org). Dec 13 09:11:13.013140 systemd-timesyncd[1339]: Initial clock synchronization to Fri 2024-12-13 09:11:13.011697 UTC. 
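systemd-timesyncd logs its initial clock synchronization against 2.flatcar.pool.ntp.org, with systemd-resolved flushing caches on the resulting clock change. A hedged check of that state on a running node:

```bash
# Show the NTP server, stratum and offset behind the synchronization message above.
timedatectl timesync-status
# Confirm the system clock is marked as synchronized.
timedatectl status
```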
Dec 13 09:11:13.072708 systemd-networkd[1358]: docker0: Link UP Dec 13 09:11:13.113393 dockerd[1697]: time="2024-12-13T09:11:13.113140794Z" level=info msg="Loading containers: done." Dec 13 09:11:13.144033 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2672993473-merged.mount: Deactivated successfully. Dec 13 09:11:13.149854 dockerd[1697]: time="2024-12-13T09:11:13.148948148Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 09:11:13.149854 dockerd[1697]: time="2024-12-13T09:11:13.149128567Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 09:11:13.149854 dockerd[1697]: time="2024-12-13T09:11:13.149446761Z" level=info msg="Daemon has completed initialization" Dec 13 09:11:13.231744 dockerd[1697]: time="2024-12-13T09:11:13.231407340Z" level=info msg="API listen on /run/docker.sock" Dec 13 09:11:13.232713 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 09:11:14.450809 containerd[1469]: time="2024-12-13T09:11:14.450733329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 09:11:15.349308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835425367.mount: Deactivated successfully. Dec 13 09:11:17.344855 containerd[1469]: time="2024-12-13T09:11:17.344786084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:17.347778 containerd[1469]: time="2024-12-13T09:11:17.347703270Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Dec 13 09:11:17.349386 containerd[1469]: time="2024-12-13T09:11:17.349314352Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:17.353053 containerd[1469]: time="2024-12-13T09:11:17.352992215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:17.354788 containerd[1469]: time="2024-12-13T09:11:17.354387599Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.903594608s" Dec 13 09:11:17.354788 containerd[1469]: time="2024-12-13T09:11:17.354438993Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 09:11:17.357338 containerd[1469]: time="2024-12-13T09:11:17.357301292Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 09:11:19.106702 containerd[1469]: time="2024-12-13T09:11:19.106637308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:19.108486 containerd[1469]: time="2024-12-13T09:11:19.107820915Z" 
level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Dec 13 09:11:19.109467 containerd[1469]: time="2024-12-13T09:11:19.109417688Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:19.113764 containerd[1469]: time="2024-12-13T09:11:19.113710597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:19.115702 containerd[1469]: time="2024-12-13T09:11:19.115092663Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.757742766s" Dec 13 09:11:19.115702 containerd[1469]: time="2024-12-13T09:11:19.115164881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 09:11:19.116422 containerd[1469]: time="2024-12-13T09:11:19.116387064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 09:11:19.118640 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 09:11:20.612337 containerd[1469]: time="2024-12-13T09:11:20.612217145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:20.613912 containerd[1469]: time="2024-12-13T09:11:20.613836568Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Dec 13 09:11:20.615448 containerd[1469]: time="2024-12-13T09:11:20.614790988Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:20.619366 containerd[1469]: time="2024-12-13T09:11:20.619295756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:20.624689 containerd[1469]: time="2024-12-13T09:11:20.624599869Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.508161184s" Dec 13 09:11:20.624689 containerd[1469]: time="2024-12-13T09:11:20.624678049Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 09:11:20.627871 containerd[1469]: time="2024-12-13T09:11:20.627815466Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 09:11:21.240676 systemd[1]: kubelet.service: Scheduled restart 
job, restart counter is at 2. Dec 13 09:11:21.250914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:21.433898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:21.437474 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:11:21.539867 kubelet[1914]: E1213 09:11:21.539584 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:11:21.544649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:11:21.544796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:11:22.139199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233753286.mount: Deactivated successfully. Dec 13 09:11:22.205249 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 13 09:11:22.846315 containerd[1469]: time="2024-12-13T09:11:22.845359784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:22.846315 containerd[1469]: time="2024-12-13T09:11:22.846271176Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Dec 13 09:11:22.847073 containerd[1469]: time="2024-12-13T09:11:22.847041515Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:22.849528 containerd[1469]: time="2024-12-13T09:11:22.849454781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:22.850487 containerd[1469]: time="2024-12-13T09:11:22.850438940Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.22256326s" Dec 13 09:11:22.850652 containerd[1469]: time="2024-12-13T09:11:22.850632752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 09:11:22.851367 containerd[1469]: time="2024-12-13T09:11:22.851331097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 09:11:23.441840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411885202.mount: Deactivated successfully. 
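The PullImage / ImageCreate lines in this stretch come from containerd's CRI plugin fetching the v1.31.4 control-plane images from registry.k8s.io. A hedged sketch of driving the same endpoint with crictl, with the image name and socket path taken from the log:

```bash
# Pull an image through the CRI endpoint containerd serves on this node.
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
  pull registry.k8s.io/kube-proxy:v1.31.4
# List the images the CRI plugin now knows about.
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep registry.k8s.io
```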
Dec 13 09:11:24.724685 containerd[1469]: time="2024-12-13T09:11:24.724346450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:24.727365 containerd[1469]: time="2024-12-13T09:11:24.726747869Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 09:11:24.729405 containerd[1469]: time="2024-12-13T09:11:24.728325732Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:24.734166 containerd[1469]: time="2024-12-13T09:11:24.734076378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:24.736126 containerd[1469]: time="2024-12-13T09:11:24.736025111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.884640362s" Dec 13 09:11:24.736448 containerd[1469]: time="2024-12-13T09:11:24.736415347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 09:11:24.738285 containerd[1469]: time="2024-12-13T09:11:24.737916412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 09:11:25.322879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908961991.mount: Deactivated successfully. 
Dec 13 09:11:25.339589 containerd[1469]: time="2024-12-13T09:11:25.338240521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:25.340122 containerd[1469]: time="2024-12-13T09:11:25.340062185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 13 09:11:25.342371 containerd[1469]: time="2024-12-13T09:11:25.342310028Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:25.346513 containerd[1469]: time="2024-12-13T09:11:25.346424424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:25.347883 containerd[1469]: time="2024-12-13T09:11:25.347822128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 609.853121ms" Dec 13 09:11:25.347883 containerd[1469]: time="2024-12-13T09:11:25.347886799Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 09:11:25.348701 containerd[1469]: time="2024-12-13T09:11:25.348649284Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 09:11:25.351020 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Dec 13 09:11:25.956396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163636899.mount: Deactivated successfully. 
Dec 13 09:11:28.145618 containerd[1469]: time="2024-12-13T09:11:28.145344863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:28.150327 containerd[1469]: time="2024-12-13T09:11:28.149888590Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Dec 13 09:11:28.152370 containerd[1469]: time="2024-12-13T09:11:28.151444264Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:28.155732 containerd[1469]: time="2024-12-13T09:11:28.155680841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:28.157128 containerd[1469]: time="2024-12-13T09:11:28.157079814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.808284958s" Dec 13 09:11:28.157128 containerd[1469]: time="2024-12-13T09:11:28.157131378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 09:11:31.269412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:31.275927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:31.330764 systemd[1]: Reloading requested from client PID 2057 ('systemctl') (unit session-9.scope)... Dec 13 09:11:31.330785 systemd[1]: Reloading... Dec 13 09:11:31.484882 zram_generator::config[2099]: No configuration found. Dec 13 09:11:31.632926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:11:31.715636 systemd[1]: Reloading finished in 383 ms. Dec 13 09:11:31.762122 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 09:11:31.762223 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 09:11:31.762525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:31.768044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:31.905546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:31.923068 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:11:31.989955 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:31.989955 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
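The deprecation warnings above say that flags such as --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config, and the earlier failures name /var/lib/kubelet/config.yaml as the path this node expects. A hypothetical, minimal KubeletConfiguration written from a shell here-document; the field values are illustrative, and on a kubeadm-bootstrapped node the real file is generated during cluster bootstrap rather than written by hand:

```bash
# Hypothetical minimal KubeletConfiguration -- illustrative values only.
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces the deprecated --container-runtime-endpoint flag (kubelet >= 1.27).
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Matches the systemd cgroup driver containerd advertises in the CRI config above.
cgroupDriver: systemd
EOF
sudo systemctl restart kubelet.service
```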
Dec 13 09:11:31.989955 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:31.990563 kubelet[2149]: I1213 09:11:31.990038 2149 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:11:32.447214 kubelet[2149]: I1213 09:11:32.447127 2149 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 09:11:32.447214 kubelet[2149]: I1213 09:11:32.447186 2149 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:11:32.447569 kubelet[2149]: I1213 09:11:32.447546 2149 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 09:11:32.469709 kubelet[2149]: I1213 09:11:32.469626 2149 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:11:32.472537 kubelet[2149]: E1213 09:11:32.471948 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.151.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:32.479829 kubelet[2149]: E1213 09:11:32.479746 2149 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 09:11:32.479829 kubelet[2149]: I1213 09:11:32.479797 2149 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 09:11:32.486601 kubelet[2149]: I1213 09:11:32.486069 2149 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 09:11:32.487858 kubelet[2149]: I1213 09:11:32.487723 2149 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 09:11:32.488398 kubelet[2149]: I1213 09:11:32.488201 2149 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:11:32.488642 kubelet[2149]: I1213 09:11:32.488407 2149 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-5-05f51c210a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 09:11:32.488763 kubelet[2149]: I1213 09:11:32.488657 2149 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:11:32.488763 kubelet[2149]: I1213 09:11:32.488670 2149 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 09:11:32.488818 kubelet[2149]: I1213 09:11:32.488807 2149 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:32.490704 kubelet[2149]: I1213 09:11:32.490669 2149 kubelet.go:408] "Attempting to sync node with API server" Dec 13 09:11:32.490704 kubelet[2149]: I1213 09:11:32.490701 2149 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:11:32.490808 kubelet[2149]: I1213 09:11:32.490737 2149 kubelet.go:314] "Adding apiserver pod source" Dec 13 09:11:32.490808 kubelet[2149]: I1213 09:11:32.490753 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:11:32.497531 kubelet[2149]: W1213 09:11:32.496113 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.151.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-5-05f51c210a&limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:32.497531 kubelet[2149]: E1213 09:11:32.496194 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://146.190.151.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-5-05f51c210a&limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:32.497905 kubelet[2149]: I1213 09:11:32.497876 2149 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:11:32.503225 kubelet[2149]: I1213 09:11:32.502347 2149 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:11:32.503693 kubelet[2149]: W1213 09:11:32.503661 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 09:11:32.504584 kubelet[2149]: I1213 09:11:32.504556 2149 server.go:1269] "Started kubelet" Dec 13 09:11:32.504805 kubelet[2149]: W1213 09:11:32.504749 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.151.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:32.504891 kubelet[2149]: E1213 09:11:32.504827 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.151.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:32.507142 kubelet[2149]: I1213 09:11:32.507103 2149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:11:32.508535 kubelet[2149]: I1213 09:11:32.508489 2149 server.go:460] "Adding debug handlers to kubelet server" Dec 13 09:11:32.512443 kubelet[2149]: I1213 09:11:32.512317 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:11:32.512750 kubelet[2149]: I1213 09:11:32.512731 2149 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:11:32.514023 kubelet[2149]: I1213 09:11:32.513997 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:11:32.518847 kubelet[2149]: E1213 09:11:32.514236 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.151.20:6443/api/v1/namespaces/default/events\": dial tcp 146.190.151.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-5-05f51c210a.1810b190ea74062d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-5-05f51c210a,UID:ci-4081.2.1-5-05f51c210a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-5-05f51c210a,},FirstTimestamp:2024-12-13 09:11:32.504520237 +0000 UTC m=+0.576461161,LastTimestamp:2024-12-13 09:11:32.504520237 +0000 UTC m=+0.576461161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-5-05f51c210a,}" Dec 13 09:11:32.522849 kubelet[2149]: I1213 09:11:32.522693 2149 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 09:11:32.526234 kubelet[2149]: I1213 
09:11:32.524108 2149 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 09:11:32.527576 kubelet[2149]: E1213 09:11:32.526788 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.1-5-05f51c210a\" not found" Dec 13 09:11:32.528352 kubelet[2149]: I1213 09:11:32.528333 2149 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 09:11:32.528551 kubelet[2149]: I1213 09:11:32.528540 2149 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:11:32.529085 kubelet[2149]: W1213 09:11:32.529044 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.151.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:32.529187 kubelet[2149]: E1213 09:11:32.529170 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.151.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:32.529889 kubelet[2149]: I1213 09:11:32.529870 2149 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:11:32.530044 kubelet[2149]: I1213 09:11:32.530029 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:11:32.530795 kubelet[2149]: E1213 09:11:32.530758 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.151.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-5-05f51c210a?timeout=10s\": dial tcp 146.190.151.20:6443: connect: connection refused" interval="200ms" Dec 13 09:11:32.533925 kubelet[2149]: E1213 09:11:32.532932 2149 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:11:32.534382 kubelet[2149]: I1213 09:11:32.534360 2149 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:11:32.548252 kubelet[2149]: I1213 09:11:32.548185 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:11:32.550110 kubelet[2149]: I1213 09:11:32.550071 2149 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 09:11:32.550196 kubelet[2149]: I1213 09:11:32.550131 2149 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:11:32.550196 kubelet[2149]: I1213 09:11:32.550160 2149 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 09:11:32.550255 kubelet[2149]: E1213 09:11:32.550227 2149 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:11:32.556143 kubelet[2149]: W1213 09:11:32.555799 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.151.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:32.556143 kubelet[2149]: E1213 09:11:32.555857 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.151.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:32.562929 kubelet[2149]: I1213 09:11:32.562836 2149 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:11:32.562929 kubelet[2149]: I1213 09:11:32.562876 2149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:11:32.562929 kubelet[2149]: I1213 09:11:32.562917 2149 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:32.566303 kubelet[2149]: I1213 09:11:32.566259 2149 policy_none.go:49] "None policy: Start" Dec 13 09:11:32.567391 kubelet[2149]: I1213 09:11:32.567354 2149 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:11:32.567391 kubelet[2149]: I1213 09:11:32.567386 2149 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:11:32.575547 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 09:11:32.584926 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 09:11:32.594900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 09:11:32.607529 kubelet[2149]: I1213 09:11:32.606178 2149 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:11:32.607529 kubelet[2149]: I1213 09:11:32.606433 2149 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 09:11:32.607529 kubelet[2149]: I1213 09:11:32.606446 2149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:11:32.607529 kubelet[2149]: I1213 09:11:32.607223 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:11:32.612144 kubelet[2149]: E1213 09:11:32.612106 2149 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-5-05f51c210a\" not found" Dec 13 09:11:32.663454 systemd[1]: Created slice kubepods-burstable-pod6386e79fc2fed2ce1bc0aa99c6570e63.slice - libcontainer container kubepods-burstable-pod6386e79fc2fed2ce1bc0aa99c6570e63.slice. Dec 13 09:11:32.680781 systemd[1]: Created slice kubepods-burstable-podf81eb8ca2f5a376b8e4afec0062b2001.slice - libcontainer container kubepods-burstable-podf81eb8ca2f5a376b8e4afec0062b2001.slice. 
Dec 13 09:11:32.695880 systemd[1]: Created slice kubepods-burstable-pod0ec71c55a86766a684f39415b9a62b02.slice - libcontainer container kubepods-burstable-pod0ec71c55a86766a684f39415b9a62b02.slice. Dec 13 09:11:32.708560 kubelet[2149]: I1213 09:11:32.707891 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.708560 kubelet[2149]: E1213 09:11:32.708319 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.151.20:6443/api/v1/nodes\": dial tcp 146.190.151.20:6443: connect: connection refused" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729407 kubelet[2149]: I1213 09:11:32.729357 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f81eb8ca2f5a376b8e4afec0062b2001-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-5-05f51c210a\" (UID: \"f81eb8ca2f5a376b8e4afec0062b2001\") " pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729739 kubelet[2149]: I1213 09:11:32.729644 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729739 kubelet[2149]: I1213 09:11:32.729678 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729739 kubelet[2149]: I1213 09:11:32.729699 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6386e79fc2fed2ce1bc0aa99c6570e63-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-5-05f51c210a\" (UID: \"6386e79fc2fed2ce1bc0aa99c6570e63\") " pod="kube-system/kube-scheduler-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729953 kubelet[2149]: I1213 09:11:32.729767 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f81eb8ca2f5a376b8e4afec0062b2001-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-5-05f51c210a\" (UID: \"f81eb8ca2f5a376b8e4afec0062b2001\") " pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729953 kubelet[2149]: I1213 09:11:32.729824 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f81eb8ca2f5a376b8e4afec0062b2001-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-5-05f51c210a\" (UID: \"f81eb8ca2f5a376b8e4afec0062b2001\") " pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729953 kubelet[2149]: I1213 09:11:32.729850 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " 
pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729953 kubelet[2149]: I1213 09:11:32.729893 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.729953 kubelet[2149]: I1213 09:11:32.729915 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.731608 kubelet[2149]: E1213 09:11:32.731549 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.151.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-5-05f51c210a?timeout=10s\": dial tcp 146.190.151.20:6443: connect: connection refused" interval="400ms" Dec 13 09:11:32.910429 kubelet[2149]: I1213 09:11:32.910386 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.910903 kubelet[2149]: E1213 09:11:32.910867 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.151.20:6443/api/v1/nodes\": dial tcp 146.190.151.20:6443: connect: connection refused" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:32.977684 kubelet[2149]: E1213 09:11:32.977048 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:32.978055 containerd[1469]: time="2024-12-13T09:11:32.978008266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-5-05f51c210a,Uid:6386e79fc2fed2ce1bc0aa99c6570e63,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:32.995148 kubelet[2149]: E1213 09:11:32.995091 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:32.999720 containerd[1469]: time="2024-12-13T09:11:32.999656545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-5-05f51c210a,Uid:f81eb8ca2f5a376b8e4afec0062b2001,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:33.000285 kubelet[2149]: E1213 09:11:33.000253 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:33.000979 containerd[1469]: time="2024-12-13T09:11:33.000927451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-5-05f51c210a,Uid:0ec71c55a86766a684f39415b9a62b02,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:33.132681 kubelet[2149]: E1213 09:11:33.132607 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.151.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-5-05f51c210a?timeout=10s\": dial tcp 146.190.151.20:6443: connect: 
connection refused" interval="800ms" Dec 13 09:11:33.312860 kubelet[2149]: I1213 09:11:33.312781 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:33.313375 kubelet[2149]: E1213 09:11:33.313324 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.151.20:6443/api/v1/nodes\": dial tcp 146.190.151.20:6443: connect: connection refused" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:33.462002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231668444.mount: Deactivated successfully. Dec 13 09:11:33.469532 containerd[1469]: time="2024-12-13T09:11:33.469440222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:33.470476 containerd[1469]: time="2024-12-13T09:11:33.470419459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 09:11:33.471570 containerd[1469]: time="2024-12-13T09:11:33.471527968Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:33.473221 containerd[1469]: time="2024-12-13T09:11:33.473046225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:11:33.474127 containerd[1469]: time="2024-12-13T09:11:33.473955294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:11:33.474127 containerd[1469]: time="2024-12-13T09:11:33.474067886Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:33.476490 containerd[1469]: time="2024-12-13T09:11:33.476439181Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:33.478533 containerd[1469]: time="2024-12-13T09:11:33.478104849Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.08234ms" Dec 13 09:11:33.480076 containerd[1469]: time="2024-12-13T09:11:33.479079903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:33.486776 containerd[1469]: time="2024-12-13T09:11:33.485435074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.662779ms" Dec 13 09:11:33.494623 containerd[1469]: time="2024-12-13T09:11:33.493514940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.355906ms" Dec 13 09:11:33.690731 containerd[1469]: time="2024-12-13T09:11:33.688281932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:33.690731 containerd[1469]: time="2024-12-13T09:11:33.689265456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:33.690731 containerd[1469]: time="2024-12-13T09:11:33.689403801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:33.690731 containerd[1469]: time="2024-12-13T09:11:33.690270564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:33.701905 containerd[1469]: time="2024-12-13T09:11:33.701685923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:33.701905 containerd[1469]: time="2024-12-13T09:11:33.701800696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:33.701905 containerd[1469]: time="2024-12-13T09:11:33.701825077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:33.705468 containerd[1469]: time="2024-12-13T09:11:33.705291848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:33.706442 containerd[1469]: time="2024-12-13T09:11:33.706028287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:33.713538 containerd[1469]: time="2024-12-13T09:11:33.711604670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:33.713538 containerd[1469]: time="2024-12-13T09:11:33.711664244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:33.713538 containerd[1469]: time="2024-12-13T09:11:33.711828180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:33.742665 systemd[1]: Started cri-containerd-f11f8d0a96b41c01da70889def27418a8821fb34c5603c9a454f58be3b7626c1.scope - libcontainer container f11f8d0a96b41c01da70889def27418a8821fb34c5603c9a454f58be3b7626c1. Dec 13 09:11:33.756454 systemd[1]: Started cri-containerd-c124b1e9752beb3981f1281174567a3e5fdbc606e38b68c20c3ba6038b40f465.scope - libcontainer container c124b1e9752beb3981f1281174567a3e5fdbc606e38b68c20c3ba6038b40f465. Dec 13 09:11:33.768803 systemd[1]: Started cri-containerd-988b6bb4809cf664d6d35f32ec08e9f7794af9ca145255435a6fd529a017af08.scope - libcontainer container 988b6bb4809cf664d6d35f32ec08e9f7794af9ca145255435a6fd529a017af08. 
Dec 13 09:11:33.826233 kubelet[2149]: W1213 09:11:33.826116 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.151.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:33.826592 kubelet[2149]: E1213 09:11:33.826558 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.151.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:33.881299 kubelet[2149]: W1213 09:11:33.881205 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.151.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:33.881299 kubelet[2149]: E1213 09:11:33.881295 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.151.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:33.886383 containerd[1469]: time="2024-12-13T09:11:33.886103485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-5-05f51c210a,Uid:f81eb8ca2f5a376b8e4afec0062b2001,Namespace:kube-system,Attempt:0,} returns sandbox id \"988b6bb4809cf664d6d35f32ec08e9f7794af9ca145255435a6fd529a017af08\"" Dec 13 09:11:33.893487 kubelet[2149]: E1213 09:11:33.893375 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:33.901509 containerd[1469]: time="2024-12-13T09:11:33.899110254Z" level=info msg="CreateContainer within sandbox \"988b6bb4809cf664d6d35f32ec08e9f7794af9ca145255435a6fd529a017af08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 09:11:33.902230 kubelet[2149]: W1213 09:11:33.902158 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.151.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-5-05f51c210a&limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:33.902408 kubelet[2149]: E1213 09:11:33.902385 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.151.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-5-05f51c210a&limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:33.931411 containerd[1469]: time="2024-12-13T09:11:33.925215053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-5-05f51c210a,Uid:0ec71c55a86766a684f39415b9a62b02,Namespace:kube-system,Attempt:0,} returns sandbox id \"f11f8d0a96b41c01da70889def27418a8821fb34c5603c9a454f58be3b7626c1\"" Dec 13 09:11:33.931411 containerd[1469]: time="2024-12-13T09:11:33.928931225Z" level=info msg="CreateContainer within sandbox 
\"f11f8d0a96b41c01da70889def27418a8821fb34c5603c9a454f58be3b7626c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 09:11:33.932223 kubelet[2149]: E1213 09:11:33.926423 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:33.934387 kubelet[2149]: E1213 09:11:33.934321 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.151.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-5-05f51c210a?timeout=10s\": dial tcp 146.190.151.20:6443: connect: connection refused" interval="1.6s" Dec 13 09:11:33.961973 containerd[1469]: time="2024-12-13T09:11:33.961762721Z" level=info msg="CreateContainer within sandbox \"f11f8d0a96b41c01da70889def27418a8821fb34c5603c9a454f58be3b7626c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a302dc1b7403d790f473f4860f8d60c63bd664d3c6746caf90eea034e5c1a9ed\"" Dec 13 09:11:33.973473 containerd[1469]: time="2024-12-13T09:11:33.973075288Z" level=info msg="StartContainer for \"a302dc1b7403d790f473f4860f8d60c63bd664d3c6746caf90eea034e5c1a9ed\"" Dec 13 09:11:33.974347 containerd[1469]: time="2024-12-13T09:11:33.974290856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-5-05f51c210a,Uid:6386e79fc2fed2ce1bc0aa99c6570e63,Namespace:kube-system,Attempt:0,} returns sandbox id \"c124b1e9752beb3981f1281174567a3e5fdbc606e38b68c20c3ba6038b40f465\"" Dec 13 09:11:33.975725 kubelet[2149]: E1213 09:11:33.975644 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:33.978626 containerd[1469]: time="2024-12-13T09:11:33.978570141Z" level=info msg="CreateContainer within sandbox \"988b6bb4809cf664d6d35f32ec08e9f7794af9ca145255435a6fd529a017af08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77ee894feaca71067c20eea9064cd9775b8c704f97dac8ce0b49ac9ac15abc95\"" Dec 13 09:11:33.981073 containerd[1469]: time="2024-12-13T09:11:33.980906699Z" level=info msg="CreateContainer within sandbox \"c124b1e9752beb3981f1281174567a3e5fdbc606e38b68c20c3ba6038b40f465\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 09:11:33.981641 containerd[1469]: time="2024-12-13T09:11:33.981610224Z" level=info msg="StartContainer for \"77ee894feaca71067c20eea9064cd9775b8c704f97dac8ce0b49ac9ac15abc95\"" Dec 13 09:11:34.009578 containerd[1469]: time="2024-12-13T09:11:34.009426568Z" level=info msg="CreateContainer within sandbox \"c124b1e9752beb3981f1281174567a3e5fdbc606e38b68c20c3ba6038b40f465\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df1b629a15b6ac821f21e9612ae5e918cf85ce0aa3142e7916e555ace8b616e3\"" Dec 13 09:11:34.010891 containerd[1469]: time="2024-12-13T09:11:34.010794171Z" level=info msg="StartContainer for \"df1b629a15b6ac821f21e9612ae5e918cf85ce0aa3142e7916e555ace8b616e3\"" Dec 13 09:11:34.036036 systemd[1]: Started cri-containerd-a302dc1b7403d790f473f4860f8d60c63bd664d3c6746caf90eea034e5c1a9ed.scope - libcontainer container a302dc1b7403d790f473f4860f8d60c63bd664d3c6746caf90eea034e5c1a9ed. 
Dec 13 09:11:34.051845 systemd[1]: Started cri-containerd-77ee894feaca71067c20eea9064cd9775b8c704f97dac8ce0b49ac9ac15abc95.scope - libcontainer container 77ee894feaca71067c20eea9064cd9775b8c704f97dac8ce0b49ac9ac15abc95. Dec 13 09:11:34.075804 kubelet[2149]: W1213 09:11:34.075602 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.151.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.151.20:6443: connect: connection refused Dec 13 09:11:34.077549 kubelet[2149]: E1213 09:11:34.075717 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.151.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.151.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:11:34.099749 systemd[1]: Started cri-containerd-df1b629a15b6ac821f21e9612ae5e918cf85ce0aa3142e7916e555ace8b616e3.scope - libcontainer container df1b629a15b6ac821f21e9612ae5e918cf85ce0aa3142e7916e555ace8b616e3. Dec 13 09:11:34.115842 kubelet[2149]: I1213 09:11:34.115797 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:34.116383 kubelet[2149]: E1213 09:11:34.116246 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://146.190.151.20:6443/api/v1/nodes\": dial tcp 146.190.151.20:6443: connect: connection refused" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:34.161662 containerd[1469]: time="2024-12-13T09:11:34.161513215Z" level=info msg="StartContainer for \"a302dc1b7403d790f473f4860f8d60c63bd664d3c6746caf90eea034e5c1a9ed\" returns successfully" Dec 13 09:11:34.204547 containerd[1469]: time="2024-12-13T09:11:34.204133364Z" level=info msg="StartContainer for \"77ee894feaca71067c20eea9064cd9775b8c704f97dac8ce0b49ac9ac15abc95\" returns successfully" Dec 13 09:11:34.241579 containerd[1469]: time="2024-12-13T09:11:34.241413954Z" level=info msg="StartContainer for \"df1b629a15b6ac821f21e9612ae5e918cf85ce0aa3142e7916e555ace8b616e3\" returns successfully" Dec 13 09:11:34.575771 kubelet[2149]: E1213 09:11:34.572835 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:34.575771 kubelet[2149]: E1213 09:11:34.573666 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:34.576318 kubelet[2149]: E1213 09:11:34.576286 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:35.578971 kubelet[2149]: E1213 09:11:35.578919 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:35.718844 kubelet[2149]: I1213 09:11:35.718792 2149 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:37.098469 kubelet[2149]: E1213 09:11:37.098404 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-5-05f51c210a\" not found" 
node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:37.193948 kubelet[2149]: I1213 09:11:37.192135 2149 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:37.193948 kubelet[2149]: E1213 09:11:37.192250 2149 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.2.1-5-05f51c210a\": node \"ci-4081.2.1-5-05f51c210a\" not found" Dec 13 09:11:37.510191 kubelet[2149]: I1213 09:11:37.509853 2149 apiserver.go:52] "Watching apiserver" Dec 13 09:11:37.535236 kubelet[2149]: I1213 09:11:37.535155 2149 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 09:11:39.881102 systemd[1]: Reloading requested from client PID 2423 ('systemctl') (unit session-9.scope)... Dec 13 09:11:39.881127 systemd[1]: Reloading... Dec 13 09:11:40.080198 zram_generator::config[2471]: No configuration found. Dec 13 09:11:40.288406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:11:40.427777 systemd[1]: Reloading finished in 546 ms. Dec 13 09:11:40.494689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:40.523205 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 09:11:40.523887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:40.524130 systemd[1]: kubelet.service: Consumed 1.095s CPU time, 112.4M memory peak, 0B memory swap peak. Dec 13 09:11:40.534347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:40.789857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:40.792931 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:11:40.877976 kubelet[2513]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:40.877976 kubelet[2513]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:11:40.877976 kubelet[2513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:40.878855 kubelet[2513]: I1213 09:11:40.878082 2513 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:11:40.894084 kubelet[2513]: I1213 09:11:40.893830 2513 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 09:11:40.894084 kubelet[2513]: I1213 09:11:40.893881 2513 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:11:40.894870 kubelet[2513]: I1213 09:11:40.894834 2513 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 09:11:40.898150 kubelet[2513]: I1213 09:11:40.898055 2513 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 09:11:40.909560 kubelet[2513]: I1213 09:11:40.907770 2513 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:11:40.918234 kubelet[2513]: E1213 09:11:40.918165 2513 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 09:11:40.918234 kubelet[2513]: I1213 09:11:40.918217 2513 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 09:11:40.925896 kubelet[2513]: I1213 09:11:40.924333 2513 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:11:40.925896 kubelet[2513]: I1213 09:11:40.924580 2513 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 09:11:40.925896 kubelet[2513]: I1213 09:11:40.924798 2513 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:11:40.925896 kubelet[2513]: I1213 09:11:40.924845 2513 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-5-05f51c210a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 09:11:40.926532 kubelet[2513]: I1213 09:11:40.925299 2513 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:11:40.926532 kubelet[2513]: I1213 09:11:40.925317 2513 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 09:11:40.926532 kubelet[2513]: I1213 09:11:40.925376 2513 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:40.926811 kubelet[2513]: I1213 09:11:40.926788 2513 kubelet.go:408] "Attempting to sync node with API server" Dec 13 09:11:40.926914 kubelet[2513]: I1213 09:11:40.926900 2513 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:11:40.927027 kubelet[2513]: I1213 
09:11:40.927014 2513 kubelet.go:314] "Adding apiserver pod source" Dec 13 09:11:40.928797 kubelet[2513]: I1213 09:11:40.928771 2513 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:11:40.935018 kubelet[2513]: I1213 09:11:40.933568 2513 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:11:40.938090 kubelet[2513]: I1213 09:11:40.936706 2513 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:11:40.941155 kubelet[2513]: I1213 09:11:40.940370 2513 server.go:1269] "Started kubelet" Dec 13 09:11:40.951853 kubelet[2513]: I1213 09:11:40.950374 2513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:11:40.960877 kubelet[2513]: I1213 09:11:40.960674 2513 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:11:40.967445 kubelet[2513]: I1213 09:11:40.967347 2513 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:11:40.976104 kubelet[2513]: I1213 09:11:40.972763 2513 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:11:40.980142 kubelet[2513]: I1213 09:11:40.968274 2513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 09:11:40.992029 kubelet[2513]: I1213 09:11:40.969654 2513 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 09:11:40.995178 kubelet[2513]: I1213 09:11:40.969677 2513 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 09:11:40.995178 kubelet[2513]: I1213 09:11:40.993720 2513 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:11:40.995178 kubelet[2513]: I1213 09:11:40.967876 2513 server.go:460] "Adding debug handlers to kubelet server" Dec 13 09:11:40.999141 kubelet[2513]: I1213 09:11:40.999098 2513 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:11:40.999355 kubelet[2513]: I1213 09:11:40.999253 2513 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:11:41.005374 kubelet[2513]: I1213 09:11:41.005248 2513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:11:41.006154 sudo[2527]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 09:11:41.007803 sudo[2527]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 09:11:41.010371 kubelet[2513]: I1213 09:11:41.010309 2513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 09:11:41.010371 kubelet[2513]: I1213 09:11:41.010367 2513 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:11:41.010622 kubelet[2513]: I1213 09:11:41.010393 2513 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 09:11:41.010622 kubelet[2513]: E1213 09:11:41.010448 2513 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:11:41.032437 kubelet[2513]: E1213 09:11:41.030744 2513 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:11:41.033870 kubelet[2513]: I1213 09:11:41.033674 2513 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:11:41.115707 kubelet[2513]: E1213 09:11:41.110708 2513 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.133032 2513 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.133058 2513 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.133084 2513 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.133286 2513 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.133300 2513 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.133329 2513 policy_none.go:49] "None policy: Start" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.135067 2513 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.135105 2513 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:11:41.135959 kubelet[2513]: I1213 09:11:41.135346 2513 state_mem.go:75] "Updated machine memory state" Dec 13 09:11:41.159264 kubelet[2513]: I1213 09:11:41.159206 2513 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:11:41.159476 kubelet[2513]: I1213 09:11:41.159458 2513 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 09:11:41.160279 kubelet[2513]: I1213 09:11:41.159474 2513 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:11:41.165525 kubelet[2513]: I1213 09:11:41.163621 2513 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:11:41.279248 kubelet[2513]: I1213 09:11:41.278450 2513 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.301152 kubelet[2513]: I1213 09:11:41.301079 2513 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.301842 kubelet[2513]: I1213 09:11:41.301203 2513 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.331326 kubelet[2513]: W1213 09:11:41.331108 2513 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:41.333860 kubelet[2513]: W1213 09:11:41.333822 2513 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:41.335484 kubelet[2513]: W1213 09:11:41.335322 2513 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:41.494841 kubelet[2513]: I1213 09:11:41.494780 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6386e79fc2fed2ce1bc0aa99c6570e63-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-5-05f51c210a\" 
(UID: \"6386e79fc2fed2ce1bc0aa99c6570e63\") " pod="kube-system/kube-scheduler-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.494841 kubelet[2513]: I1213 09:11:41.494830 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f81eb8ca2f5a376b8e4afec0062b2001-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-5-05f51c210a\" (UID: \"f81eb8ca2f5a376b8e4afec0062b2001\") " pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.494841 kubelet[2513]: I1213 09:11:41.494853 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f81eb8ca2f5a376b8e4afec0062b2001-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-5-05f51c210a\" (UID: \"f81eb8ca2f5a376b8e4afec0062b2001\") " pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.495154 kubelet[2513]: I1213 09:11:41.494871 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.495154 kubelet[2513]: I1213 09:11:41.494888 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.495154 kubelet[2513]: I1213 09:11:41.494908 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.495154 kubelet[2513]: I1213 09:11:41.494932 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f81eb8ca2f5a376b8e4afec0062b2001-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-5-05f51c210a\" (UID: \"f81eb8ca2f5a376b8e4afec0062b2001\") " pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.495154 kubelet[2513]: I1213 09:11:41.494967 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:41.495277 kubelet[2513]: I1213 09:11:41.494995 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ec71c55a86766a684f39415b9a62b02-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-5-05f51c210a\" (UID: \"0ec71c55a86766a684f39415b9a62b02\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" 
Dec 13 09:11:41.636523 kubelet[2513]: E1213 09:11:41.632645 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:41.636523 kubelet[2513]: E1213 09:11:41.634702 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:41.636523 kubelet[2513]: E1213 09:11:41.636004 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:41.896511 sudo[2527]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:41.942542 kubelet[2513]: I1213 09:11:41.941354 2513 apiserver.go:52] "Watching apiserver" Dec 13 09:11:41.994599 kubelet[2513]: I1213 09:11:41.994546 2513 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 09:11:42.071290 update_engine[1448]: I20241213 09:11:42.071162 1448 update_attempter.cc:509] Updating boot flags... Dec 13 09:11:42.095448 kubelet[2513]: E1213 09:11:42.092932 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:42.104430 kubelet[2513]: E1213 09:11:42.103017 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:42.113302 kubelet[2513]: W1213 09:11:42.110819 2513 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:42.113302 kubelet[2513]: E1213 09:11:42.110912 2513 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-5-05f51c210a\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" Dec 13 09:11:42.114385 kubelet[2513]: E1213 09:11:42.114246 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:42.199719 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2562) Dec 13 09:11:42.242537 kubelet[2513]: I1213 09:11:42.242026 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-5-05f51c210a" podStartSLOduration=1.242003865 podStartE2EDuration="1.242003865s" podCreationTimestamp="2024-12-13 09:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:42.237793509 +0000 UTC m=+1.438671311" watchObservedRunningTime="2024-12-13 09:11:42.242003865 +0000 UTC m=+1.442881658" Dec 13 09:11:42.357588 kubelet[2513]: I1213 09:11:42.357132 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-5-05f51c210a" podStartSLOduration=1.357107254 podStartE2EDuration="1.357107254s" podCreationTimestamp="2024-12-13 09:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-12-13 09:11:42.302603859 +0000 UTC m=+1.503481671" watchObservedRunningTime="2024-12-13 09:11:42.357107254 +0000 UTC m=+1.557985053" Dec 13 09:11:42.413590 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2564) Dec 13 09:11:42.520306 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2564) Dec 13 09:11:43.093063 kubelet[2513]: E1213 09:11:43.091366 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:44.062958 kubelet[2513]: I1213 09:11:44.062364 2513 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 09:11:44.063144 containerd[1469]: time="2024-12-13T09:11:44.062850968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 09:11:44.064274 kubelet[2513]: I1213 09:11:44.063832 2513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 09:11:44.323764 sudo[1670]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:44.327894 sshd[1664]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:44.334326 systemd[1]: sshd@8-146.190.151.20:22-147.75.109.163:39948.service: Deactivated successfully. Dec 13 09:11:44.341206 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 09:11:44.341931 systemd[1]: session-9.scope: Consumed 6.426s CPU time, 149.4M memory peak, 0B memory swap peak. Dec 13 09:11:44.342928 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Dec 13 09:11:44.345398 systemd-logind[1446]: Removed session 9. Dec 13 09:11:44.720750 kubelet[2513]: I1213 09:11:44.720436 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-5-05f51c210a" podStartSLOduration=3.720399445 podStartE2EDuration="3.720399445s" podCreationTimestamp="2024-12-13 09:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:42.357987515 +0000 UTC m=+1.558865319" watchObservedRunningTime="2024-12-13 09:11:44.720399445 +0000 UTC m=+3.921277248" Dec 13 09:11:44.748572 systemd[1]: Created slice kubepods-besteffort-podc49bd2e7_8fe6_423b_86a2_1ba082025645.slice - libcontainer container kubepods-besteffort-podc49bd2e7_8fe6_423b_86a2_1ba082025645.slice. Dec 13 09:11:44.770123 systemd[1]: Created slice kubepods-burstable-pod56c6e77a_9013_47b8_99a9_4dc5b9930b0c.slice - libcontainer container kubepods-burstable-pod56c6e77a_9013_47b8_99a9_4dc5b9930b0c.slice. 
Dec 13 09:11:44.829474 kubelet[2513]: I1213 09:11:44.829200 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c49bd2e7-8fe6-423b-86a2-1ba082025645-kube-proxy\") pod \"kube-proxy-lwxnc\" (UID: \"c49bd2e7-8fe6-423b-86a2-1ba082025645\") " pod="kube-system/kube-proxy-lwxnc" Dec 13 09:11:44.829474 kubelet[2513]: I1213 09:11:44.829267 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c49bd2e7-8fe6-423b-86a2-1ba082025645-xtables-lock\") pod \"kube-proxy-lwxnc\" (UID: \"c49bd2e7-8fe6-423b-86a2-1ba082025645\") " pod="kube-system/kube-proxy-lwxnc" Dec 13 09:11:44.829474 kubelet[2513]: I1213 09:11:44.829297 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-run\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.829474 kubelet[2513]: I1213 09:11:44.829326 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c49bd2e7-8fe6-423b-86a2-1ba082025645-lib-modules\") pod \"kube-proxy-lwxnc\" (UID: \"c49bd2e7-8fe6-423b-86a2-1ba082025645\") " pod="kube-system/kube-proxy-lwxnc" Dec 13 09:11:44.829474 kubelet[2513]: I1213 09:11:44.829350 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vft\" (UniqueName: \"kubernetes.io/projected/c49bd2e7-8fe6-423b-86a2-1ba082025645-kube-api-access-77vft\") pod \"kube-proxy-lwxnc\" (UID: \"c49bd2e7-8fe6-423b-86a2-1ba082025645\") " pod="kube-system/kube-proxy-lwxnc" Dec 13 09:11:44.902732 systemd[1]: Created slice kubepods-besteffort-podca543a1e_a48e_4fd7_b1da_fc54da14712c.slice - libcontainer container kubepods-besteffort-podca543a1e_a48e_4fd7_b1da_fc54da14712c.slice. 
Dec 13 09:11:44.930464 kubelet[2513]: I1213 09:11:44.930379 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-etc-cni-netd\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.930464 kubelet[2513]: I1213 09:11:44.930453 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-config-path\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.930464 kubelet[2513]: I1213 09:11:44.930473 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-xtables-lock\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.930464 kubelet[2513]: I1213 09:11:44.930489 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-kernel\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932614 kubelet[2513]: I1213 09:11:44.930556 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hubble-tls\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932614 kubelet[2513]: I1213 09:11:44.930626 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-net\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932614 kubelet[2513]: I1213 09:11:44.930686 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzhm\" (UniqueName: \"kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-kube-api-access-zgzhm\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932614 kubelet[2513]: I1213 09:11:44.930759 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-clustermesh-secrets\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932614 kubelet[2513]: I1213 09:11:44.930820 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hostproc\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932614 kubelet[2513]: I1213 09:11:44.930847 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-cgroup\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932915 kubelet[2513]: I1213 09:11:44.930881 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cni-path\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932915 kubelet[2513]: I1213 09:11:44.930906 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-bpf-maps\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:44.932915 kubelet[2513]: I1213 09:11:44.930930 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-lib-modules\") pod \"cilium-mlndh\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " pod="kube-system/cilium-mlndh" Dec 13 09:11:45.032023 kubelet[2513]: I1213 09:11:45.031219 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca543a1e-a48e-4fd7-b1da-fc54da14712c-cilium-config-path\") pod \"cilium-operator-5d85765b45-dpxmn\" (UID: \"ca543a1e-a48e-4fd7-b1da-fc54da14712c\") " pod="kube-system/cilium-operator-5d85765b45-dpxmn" Dec 13 09:11:45.032023 kubelet[2513]: I1213 09:11:45.031291 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2frjb\" (UniqueName: \"kubernetes.io/projected/ca543a1e-a48e-4fd7-b1da-fc54da14712c-kube-api-access-2frjb\") pod \"cilium-operator-5d85765b45-dpxmn\" (UID: \"ca543a1e-a48e-4fd7-b1da-fc54da14712c\") " pod="kube-system/cilium-operator-5d85765b45-dpxmn" Dec 13 09:11:45.062812 kubelet[2513]: E1213 09:11:45.062765 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.065003 containerd[1469]: time="2024-12-13T09:11:45.064923479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwxnc,Uid:c49bd2e7-8fe6-423b-86a2-1ba082025645,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:45.077965 kubelet[2513]: E1213 09:11:45.077896 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.078789 containerd[1469]: time="2024-12-13T09:11:45.078747137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlndh,Uid:56c6e77a-9013-47b8-99a9-4dc5b9930b0c,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:45.114402 containerd[1469]: time="2024-12-13T09:11:45.113629245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:45.116906 containerd[1469]: time="2024-12-13T09:11:45.116596152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:45.116906 containerd[1469]: time="2024-12-13T09:11:45.116631382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.116906 containerd[1469]: time="2024-12-13T09:11:45.116790288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.154239 containerd[1469]: time="2024-12-13T09:11:45.154061630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:45.154825 containerd[1469]: time="2024-12-13T09:11:45.154771320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:45.157006 containerd[1469]: time="2024-12-13T09:11:45.156865823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.157313 containerd[1469]: time="2024-12-13T09:11:45.157280077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.184823 systemd[1]: Started cri-containerd-2f128ebe518c0436fb2584a533ceacb1fd17e3cefd67fa0f0f3d1e053c6c03da.scope - libcontainer container 2f128ebe518c0436fb2584a533ceacb1fd17e3cefd67fa0f0f3d1e053c6c03da. Dec 13 09:11:45.191257 systemd[1]: Started cri-containerd-7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5.scope - libcontainer container 7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5. Dec 13 09:11:45.208634 kubelet[2513]: E1213 09:11:45.208570 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.211459 containerd[1469]: time="2024-12-13T09:11:45.211123164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dpxmn,Uid:ca543a1e-a48e-4fd7-b1da-fc54da14712c,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:45.257684 containerd[1469]: time="2024-12-13T09:11:45.257594590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlndh,Uid:56c6e77a-9013-47b8-99a9-4dc5b9930b0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\"" Dec 13 09:11:45.262725 kubelet[2513]: E1213 09:11:45.261346 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.275896 containerd[1469]: time="2024-12-13T09:11:45.275803726Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 09:11:45.278566 containerd[1469]: time="2024-12-13T09:11:45.278415156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwxnc,Uid:c49bd2e7-8fe6-423b-86a2-1ba082025645,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f128ebe518c0436fb2584a533ceacb1fd17e3cefd67fa0f0f3d1e053c6c03da\"" Dec 13 09:11:45.282629 kubelet[2513]: E1213 09:11:45.282429 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.290792 containerd[1469]: time="2024-12-13T09:11:45.290431470Z" level=info msg="CreateContainer within sandbox \"2f128ebe518c0436fb2584a533ceacb1fd17e3cefd67fa0f0f3d1e053c6c03da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 09:11:45.310641 containerd[1469]: time="2024-12-13T09:11:45.305332796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:45.310641 containerd[1469]: time="2024-12-13T09:11:45.305402540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:45.310641 containerd[1469]: time="2024-12-13T09:11:45.305417186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.310641 containerd[1469]: time="2024-12-13T09:11:45.305600026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:45.325231 containerd[1469]: time="2024-12-13T09:11:45.325182099Z" level=info msg="CreateContainer within sandbox \"2f128ebe518c0436fb2584a533ceacb1fd17e3cefd67fa0f0f3d1e053c6c03da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"13f41aad1af094b4baee72baf7f4efb9d3309aab6b56aaf678a30c1e19f66482\"" Dec 13 09:11:45.329698 containerd[1469]: time="2024-12-13T09:11:45.329278642Z" level=info msg="StartContainer for \"13f41aad1af094b4baee72baf7f4efb9d3309aab6b56aaf678a30c1e19f66482\"" Dec 13 09:11:45.340968 systemd[1]: Started cri-containerd-01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5.scope - libcontainer container 01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5. Dec 13 09:11:45.381821 systemd[1]: Started cri-containerd-13f41aad1af094b4baee72baf7f4efb9d3309aab6b56aaf678a30c1e19f66482.scope - libcontainer container 13f41aad1af094b4baee72baf7f4efb9d3309aab6b56aaf678a30c1e19f66482. 
Dec 13 09:11:45.432195 containerd[1469]: time="2024-12-13T09:11:45.432147562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dpxmn,Uid:ca543a1e-a48e-4fd7-b1da-fc54da14712c,Namespace:kube-system,Attempt:0,} returns sandbox id \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\"" Dec 13 09:11:45.434689 kubelet[2513]: E1213 09:11:45.434618 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.449547 containerd[1469]: time="2024-12-13T09:11:45.449438150Z" level=info msg="StartContainer for \"13f41aad1af094b4baee72baf7f4efb9d3309aab6b56aaf678a30c1e19f66482\" returns successfully" Dec 13 09:11:46.105986 kubelet[2513]: E1213 09:11:46.105596 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:46.120016 kubelet[2513]: I1213 09:11:46.119053 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lwxnc" podStartSLOduration=2.119033926 podStartE2EDuration="2.119033926s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:46.118484104 +0000 UTC m=+5.319361919" watchObservedRunningTime="2024-12-13 09:11:46.119033926 +0000 UTC m=+5.319911759" Dec 13 09:11:49.405883 kubelet[2513]: E1213 09:11:49.405215 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:49.785294 kubelet[2513]: E1213 09:11:49.785240 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:50.220981 kubelet[2513]: E1213 09:11:50.220103 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:50.240661 kubelet[2513]: E1213 09:11:50.238723 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:50.668833 kubelet[2513]: E1213 09:11:50.667520 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:51.207783 kubelet[2513]: E1213 09:11:51.206953 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:52.894670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82805120.mount: Deactivated successfully. 
Dec 13 09:11:56.001958 containerd[1469]: time="2024-12-13T09:11:56.001656468Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:56.005285 containerd[1469]: time="2024-12-13T09:11:56.004070506Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735379" Dec 13 09:11:56.007834 containerd[1469]: time="2024-12-13T09:11:56.007684230Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:56.024462 containerd[1469]: time="2024-12-13T09:11:56.024379961Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.748290604s" Dec 13 09:11:56.024810 containerd[1469]: time="2024-12-13T09:11:56.024772028Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 09:11:56.036544 containerd[1469]: time="2024-12-13T09:11:56.035798300Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 09:11:56.056564 containerd[1469]: time="2024-12-13T09:11:56.056478213Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 09:11:56.196203 containerd[1469]: time="2024-12-13T09:11:56.196097887Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1\"" Dec 13 09:11:56.200250 containerd[1469]: time="2024-12-13T09:11:56.198698226Z" level=info msg="StartContainer for \"09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1\"" Dec 13 09:11:56.344673 systemd[1]: run-containerd-runc-k8s.io-09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1-runc.X1UDEH.mount: Deactivated successfully. Dec 13 09:11:56.353983 systemd[1]: Started cri-containerd-09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1.scope - libcontainer container 09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1. Dec 13 09:11:56.408761 containerd[1469]: time="2024-12-13T09:11:56.408659683Z" level=info msg="StartContainer for \"09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1\" returns successfully" Dec 13 09:11:56.421289 systemd[1]: cri-containerd-09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1.scope: Deactivated successfully. 
Dec 13 09:11:56.566862 containerd[1469]: time="2024-12-13T09:11:56.552902302Z" level=info msg="shim disconnected" id=09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1 namespace=k8s.io Dec 13 09:11:56.566862 containerd[1469]: time="2024-12-13T09:11:56.566813250Z" level=warning msg="cleaning up after shim disconnected" id=09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1 namespace=k8s.io Dec 13 09:11:56.566862 containerd[1469]: time="2024-12-13T09:11:56.566833322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:11:56.584221 containerd[1469]: time="2024-12-13T09:11:56.584143543Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:11:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:11:57.172468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1-rootfs.mount: Deactivated successfully. Dec 13 09:11:57.269437 kubelet[2513]: E1213 09:11:57.269343 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:57.275370 containerd[1469]: time="2024-12-13T09:11:57.274763942Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 09:11:57.319988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348516972.mount: Deactivated successfully. Dec 13 09:11:57.328214 containerd[1469]: time="2024-12-13T09:11:57.328134008Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe\"" Dec 13 09:11:57.334121 containerd[1469]: time="2024-12-13T09:11:57.331745890Z" level=info msg="StartContainer for \"e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe\"" Dec 13 09:11:57.387873 systemd[1]: Started cri-containerd-e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe.scope - libcontainer container e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe. Dec 13 09:11:57.431377 containerd[1469]: time="2024-12-13T09:11:57.431114841Z" level=info msg="StartContainer for \"e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe\" returns successfully" Dec 13 09:11:57.452484 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 09:11:57.453462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:11:57.453658 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:11:57.461091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:11:57.461412 systemd[1]: cri-containerd-e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe.scope: Deactivated successfully. Dec 13 09:11:57.508447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 09:11:57.520526 containerd[1469]: time="2024-12-13T09:11:57.520010032Z" level=info msg="shim disconnected" id=e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe namespace=k8s.io Dec 13 09:11:57.520526 containerd[1469]: time="2024-12-13T09:11:57.520099643Z" level=warning msg="cleaning up after shim disconnected" id=e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe namespace=k8s.io Dec 13 09:11:57.520526 containerd[1469]: time="2024-12-13T09:11:57.520113608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:11:58.173251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe-rootfs.mount: Deactivated successfully. Dec 13 09:11:58.274582 kubelet[2513]: E1213 09:11:58.274133 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:58.293926 containerd[1469]: time="2024-12-13T09:11:58.290213640Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 09:11:58.372351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189771139.mount: Deactivated successfully. Dec 13 09:11:58.377711 containerd[1469]: time="2024-12-13T09:11:58.377334050Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66\"" Dec 13 09:11:58.379604 containerd[1469]: time="2024-12-13T09:11:58.379248960Z" level=info msg="StartContainer for \"393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66\"" Dec 13 09:11:58.456413 systemd[1]: Started cri-containerd-393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66.scope - libcontainer container 393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66. Dec 13 09:11:58.516325 containerd[1469]: time="2024-12-13T09:11:58.516251333Z" level=info msg="StartContainer for \"393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66\" returns successfully" Dec 13 09:11:58.523876 systemd[1]: cri-containerd-393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66.scope: Deactivated successfully. 
Dec 13 09:11:58.555207 containerd[1469]: time="2024-12-13T09:11:58.554999457Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:58.558626 containerd[1469]: time="2024-12-13T09:11:58.558548140Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907273" Dec 13 09:11:58.559423 containerd[1469]: time="2024-12-13T09:11:58.559373404Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:58.565060 containerd[1469]: time="2024-12-13T09:11:58.563943839Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.528097742s" Dec 13 09:11:58.565060 containerd[1469]: time="2024-12-13T09:11:58.564025632Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 09:11:58.573825 containerd[1469]: time="2024-12-13T09:11:58.570664481Z" level=info msg="CreateContainer within sandbox \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 09:11:58.597118 containerd[1469]: time="2024-12-13T09:11:58.596702697Z" level=info msg="shim disconnected" id=393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66 namespace=k8s.io Dec 13 09:11:58.597118 containerd[1469]: time="2024-12-13T09:11:58.596808961Z" level=warning msg="cleaning up after shim disconnected" id=393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66 namespace=k8s.io Dec 13 09:11:58.597118 containerd[1469]: time="2024-12-13T09:11:58.596823952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:11:58.609439 containerd[1469]: time="2024-12-13T09:11:58.609291794Z" level=info msg="CreateContainer within sandbox \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\"" Dec 13 09:11:58.611823 containerd[1469]: time="2024-12-13T09:11:58.611773776Z" level=info msg="StartContainer for \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\"" Dec 13 09:11:58.651792 systemd[1]: Started cri-containerd-5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a.scope - libcontainer container 5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a. Dec 13 09:11:58.714274 containerd[1469]: time="2024-12-13T09:11:58.714089665Z" level=info msg="StartContainer for \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\" returns successfully" Dec 13 09:11:59.177001 systemd[1]: run-containerd-runc-k8s.io-393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66-runc.P8HATK.mount: Deactivated successfully. 
Dec 13 09:11:59.177157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66-rootfs.mount: Deactivated successfully. Dec 13 09:11:59.285114 kubelet[2513]: E1213 09:11:59.285009 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:59.295131 kubelet[2513]: E1213 09:11:59.294733 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:59.299693 containerd[1469]: time="2024-12-13T09:11:59.299093851Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 09:11:59.342700 containerd[1469]: time="2024-12-13T09:11:59.341986867Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317\"" Dec 13 09:11:59.345048 containerd[1469]: time="2024-12-13T09:11:59.344705157Z" level=info msg="StartContainer for \"9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317\"" Dec 13 09:11:59.414852 systemd[1]: Started cri-containerd-9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317.scope - libcontainer container 9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317. Dec 13 09:11:59.514921 systemd[1]: cri-containerd-9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317.scope: Deactivated successfully. 
Dec 13 09:11:59.518620 containerd[1469]: time="2024-12-13T09:11:59.517140149Z" level=info msg="StartContainer for \"9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317\" returns successfully" Dec 13 09:11:59.572168 containerd[1469]: time="2024-12-13T09:11:59.572083841Z" level=info msg="shim disconnected" id=9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317 namespace=k8s.io Dec 13 09:11:59.573060 containerd[1469]: time="2024-12-13T09:11:59.572585628Z" level=warning msg="cleaning up after shim disconnected" id=9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317 namespace=k8s.io Dec 13 09:11:59.573060 containerd[1469]: time="2024-12-13T09:11:59.572632473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:11:59.613569 containerd[1469]: time="2024-12-13T09:11:59.612570858Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:11:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:11:59.647844 kubelet[2513]: I1213 09:11:59.647727 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dpxmn" podStartSLOduration=2.518800986 podStartE2EDuration="15.647706321s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="2024-12-13 09:11:45.437344256 +0000 UTC m=+4.638222048" lastFinishedPulling="2024-12-13 09:11:58.56624958 +0000 UTC m=+17.767127383" observedRunningTime="2024-12-13 09:11:59.47195609 +0000 UTC m=+18.672833902" watchObservedRunningTime="2024-12-13 09:11:59.647706321 +0000 UTC m=+18.848584127" Dec 13 09:12:00.174770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317-rootfs.mount: Deactivated successfully. Dec 13 09:12:00.304123 kubelet[2513]: E1213 09:12:00.301242 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:00.304123 kubelet[2513]: E1213 09:12:00.301271 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:00.309026 containerd[1469]: time="2024-12-13T09:12:00.308964188Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 09:12:00.352771 containerd[1469]: time="2024-12-13T09:12:00.352677482Z" level=info msg="CreateContainer within sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\"" Dec 13 09:12:00.354554 containerd[1469]: time="2024-12-13T09:12:00.353787346Z" level=info msg="StartContainer for \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\"" Dec 13 09:12:00.444547 systemd[1]: Started cri-containerd-423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f.scope - libcontainer container 423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f. 
Dec 13 09:12:00.583282 containerd[1469]: time="2024-12-13T09:12:00.583068190Z" level=info msg="StartContainer for \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\" returns successfully" Dec 13 09:12:00.943543 kubelet[2513]: I1213 09:12:00.941147 2513 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 09:12:01.083843 systemd[1]: Created slice kubepods-burstable-podcec8a4dc_1954_4ead_86e2_83ff1c421454.slice - libcontainer container kubepods-burstable-podcec8a4dc_1954_4ead_86e2_83ff1c421454.slice. Dec 13 09:12:01.099849 systemd[1]: Created slice kubepods-burstable-pod9e976adc_3ddc_4db8_97d1_07e632ac64f7.slice - libcontainer container kubepods-burstable-pod9e976adc_3ddc_4db8_97d1_07e632ac64f7.slice. Dec 13 09:12:01.263672 kubelet[2513]: I1213 09:12:01.263611 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf2dv\" (UniqueName: \"kubernetes.io/projected/9e976adc-3ddc-4db8-97d1-07e632ac64f7-kube-api-access-cf2dv\") pod \"coredns-6f6b679f8f-frkq6\" (UID: \"9e976adc-3ddc-4db8-97d1-07e632ac64f7\") " pod="kube-system/coredns-6f6b679f8f-frkq6" Dec 13 09:12:01.263672 kubelet[2513]: I1213 09:12:01.263693 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec8a4dc-1954-4ead-86e2-83ff1c421454-config-volume\") pod \"coredns-6f6b679f8f-69snq\" (UID: \"cec8a4dc-1954-4ead-86e2-83ff1c421454\") " pod="kube-system/coredns-6f6b679f8f-69snq" Dec 13 09:12:01.263963 kubelet[2513]: I1213 09:12:01.263728 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e976adc-3ddc-4db8-97d1-07e632ac64f7-config-volume\") pod \"coredns-6f6b679f8f-frkq6\" (UID: \"9e976adc-3ddc-4db8-97d1-07e632ac64f7\") " pod="kube-system/coredns-6f6b679f8f-frkq6" Dec 13 09:12:01.263963 kubelet[2513]: I1213 09:12:01.263763 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxrj9\" (UniqueName: \"kubernetes.io/projected/cec8a4dc-1954-4ead-86e2-83ff1c421454-kube-api-access-xxrj9\") pod \"coredns-6f6b679f8f-69snq\" (UID: \"cec8a4dc-1954-4ead-86e2-83ff1c421454\") " pod="kube-system/coredns-6f6b679f8f-69snq" Dec 13 09:12:01.330594 kubelet[2513]: E1213 09:12:01.327934 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:01.542408 kubelet[2513]: I1213 09:12:01.533268 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mlndh" podStartSLOduration=6.774501405 podStartE2EDuration="17.533233605s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="2024-12-13 09:11:45.274739549 +0000 UTC m=+4.475617331" lastFinishedPulling="2024-12-13 09:11:56.033471738 +0000 UTC m=+15.234349531" observedRunningTime="2024-12-13 09:12:01.530290381 +0000 UTC m=+20.731168182" watchObservedRunningTime="2024-12-13 09:12:01.533233605 +0000 UTC m=+20.734111408" Dec 13 09:12:01.693516 kubelet[2513]: E1213 09:12:01.693290 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:01.695718 containerd[1469]: 
time="2024-12-13T09:12:01.694559300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69snq,Uid:cec8a4dc-1954-4ead-86e2-83ff1c421454,Namespace:kube-system,Attempt:0,}" Dec 13 09:12:01.710730 kubelet[2513]: E1213 09:12:01.710234 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:01.711792 containerd[1469]: time="2024-12-13T09:12:01.711263958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-frkq6,Uid:9e976adc-3ddc-4db8-97d1-07e632ac64f7,Namespace:kube-system,Attempt:0,}" Dec 13 09:12:02.332656 kubelet[2513]: E1213 09:12:02.330357 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:03.333650 kubelet[2513]: E1213 09:12:03.333272 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:03.893802 systemd-networkd[1358]: cilium_host: Link UP Dec 13 09:12:03.895607 systemd-networkd[1358]: cilium_net: Link UP Dec 13 09:12:03.895613 systemd-networkd[1358]: cilium_net: Gained carrier Dec 13 09:12:03.895976 systemd-networkd[1358]: cilium_host: Gained carrier Dec 13 09:12:04.254636 systemd-networkd[1358]: cilium_vxlan: Link UP Dec 13 09:12:04.254651 systemd-networkd[1358]: cilium_vxlan: Gained carrier Dec 13 09:12:04.301356 systemd-networkd[1358]: cilium_host: Gained IPv6LL Dec 13 09:12:04.509458 systemd-networkd[1358]: cilium_net: Gained IPv6LL Dec 13 09:12:05.054679 kernel: NET: Registered PF_ALG protocol family Dec 13 09:12:05.795228 systemd-networkd[1358]: cilium_vxlan: Gained IPv6LL Dec 13 09:12:06.316223 systemd-networkd[1358]: lxc_health: Link UP Dec 13 09:12:06.343699 systemd-networkd[1358]: lxc_health: Gained carrier Dec 13 09:12:06.913378 systemd-networkd[1358]: lxc15375dedea9f: Link UP Dec 13 09:12:06.923714 kernel: eth0: renamed from tmpec0fd Dec 13 09:12:06.933170 systemd-networkd[1358]: lxc15375dedea9f: Gained carrier Dec 13 09:12:06.958207 systemd-networkd[1358]: lxcbe336c26d8c2: Link UP Dec 13 09:12:06.966650 kernel: eth0: renamed from tmp760f4 Dec 13 09:12:06.970475 systemd-networkd[1358]: lxcbe336c26d8c2: Gained carrier Dec 13 09:12:07.083788 kubelet[2513]: E1213 09:12:07.080799 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:07.352182 kubelet[2513]: E1213 09:12:07.352143 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:08.156798 systemd-networkd[1358]: lxc_health: Gained IPv6LL Dec 13 09:12:08.668914 systemd-networkd[1358]: lxcbe336c26d8c2: Gained IPv6LL Dec 13 09:12:08.924711 systemd-networkd[1358]: lxc15375dedea9f: Gained IPv6LL Dec 13 09:12:12.756482 containerd[1469]: time="2024-12-13T09:12:12.755433188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:12.757351 containerd[1469]: time="2024-12-13T09:12:12.755512686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:12.757351 containerd[1469]: time="2024-12-13T09:12:12.756872229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:12.757777 containerd[1469]: time="2024-12-13T09:12:12.757202687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:12.809813 systemd[1]: Started cri-containerd-760f42c6ff9bf5ec41f9b6fc81939c341c9b9ec0a6feb9ce95a4214333082d91.scope - libcontainer container 760f42c6ff9bf5ec41f9b6fc81939c341c9b9ec0a6feb9ce95a4214333082d91. Dec 13 09:12:12.857999 containerd[1469]: time="2024-12-13T09:12:12.857214338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:12.858188 containerd[1469]: time="2024-12-13T09:12:12.857657300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:12.860980 containerd[1469]: time="2024-12-13T09:12:12.860694132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:12.860980 containerd[1469]: time="2024-12-13T09:12:12.860939300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:12.894858 systemd[1]: Started cri-containerd-ec0fddb4018be36f39f7c5fb95a28b1daf6d3466aac485770483b408ec73fe1a.scope - libcontainer container ec0fddb4018be36f39f7c5fb95a28b1daf6d3466aac485770483b408ec73fe1a. Dec 13 09:12:12.977263 containerd[1469]: time="2024-12-13T09:12:12.977198459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-frkq6,Uid:9e976adc-3ddc-4db8-97d1-07e632ac64f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"760f42c6ff9bf5ec41f9b6fc81939c341c9b9ec0a6feb9ce95a4214333082d91\"" Dec 13 09:12:12.980897 kubelet[2513]: E1213 09:12:12.979820 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:12.991052 containerd[1469]: time="2024-12-13T09:12:12.990532874Z" level=info msg="CreateContainer within sandbox \"760f42c6ff9bf5ec41f9b6fc81939c341c9b9ec0a6feb9ce95a4214333082d91\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:12:13.041659 containerd[1469]: time="2024-12-13T09:12:13.039658368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69snq,Uid:cec8a4dc-1954-4ead-86e2-83ff1c421454,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec0fddb4018be36f39f7c5fb95a28b1daf6d3466aac485770483b408ec73fe1a\"" Dec 13 09:12:13.041659 containerd[1469]: time="2024-12-13T09:12:13.041432211Z" level=info msg="CreateContainer within sandbox \"760f42c6ff9bf5ec41f9b6fc81939c341c9b9ec0a6feb9ce95a4214333082d91\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"770fdec538c175dd55fcc5bcc285bca7f708278111f9846933fc04daff307f64\"" Dec 13 09:12:13.044396 containerd[1469]: time="2024-12-13T09:12:13.044346382Z" level=info msg="StartContainer for \"770fdec538c175dd55fcc5bcc285bca7f708278111f9846933fc04daff307f64\"" Dec 13 09:12:13.045371 kubelet[2513]: E1213 09:12:13.045323 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:13.048943 containerd[1469]: time="2024-12-13T09:12:13.048567150Z" level=info msg="CreateContainer within sandbox \"ec0fddb4018be36f39f7c5fb95a28b1daf6d3466aac485770483b408ec73fe1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:12:13.075485 containerd[1469]: time="2024-12-13T09:12:13.075394189Z" level=info msg="CreateContainer within sandbox \"ec0fddb4018be36f39f7c5fb95a28b1daf6d3466aac485770483b408ec73fe1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a881d290a663835a2113daa2681768fdddf6250695bbe31dcc2533badec23f56\"" Dec 13 09:12:13.077742 containerd[1469]: time="2024-12-13T09:12:13.077559653Z" level=info msg="StartContainer for \"a881d290a663835a2113daa2681768fdddf6250695bbe31dcc2533badec23f56\"" Dec 13 09:12:13.116810 systemd[1]: Started cri-containerd-770fdec538c175dd55fcc5bcc285bca7f708278111f9846933fc04daff307f64.scope - libcontainer container 770fdec538c175dd55fcc5bcc285bca7f708278111f9846933fc04daff307f64. Dec 13 09:12:13.141838 systemd[1]: Started cri-containerd-a881d290a663835a2113daa2681768fdddf6250695bbe31dcc2533badec23f56.scope - libcontainer container a881d290a663835a2113daa2681768fdddf6250695bbe31dcc2533badec23f56. Dec 13 09:12:13.203365 containerd[1469]: time="2024-12-13T09:12:13.200574794Z" level=info msg="StartContainer for \"770fdec538c175dd55fcc5bcc285bca7f708278111f9846933fc04daff307f64\" returns successfully" Dec 13 09:12:13.208310 containerd[1469]: time="2024-12-13T09:12:13.208004347Z" level=info msg="StartContainer for \"a881d290a663835a2113daa2681768fdddf6250695bbe31dcc2533badec23f56\" returns successfully" Dec 13 09:12:13.392898 kubelet[2513]: E1213 09:12:13.391114 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:13.396153 kubelet[2513]: E1213 09:12:13.395253 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:13.447587 kubelet[2513]: I1213 09:12:13.447414 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-frkq6" podStartSLOduration=29.447386876 podStartE2EDuration="29.447386876s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:13.425847476 +0000 UTC m=+32.626725274" watchObservedRunningTime="2024-12-13 09:12:13.447386876 +0000 UTC m=+32.648264689" Dec 13 09:12:13.764476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300652477.mount: Deactivated successfully. 
Dec 13 09:12:14.398533 kubelet[2513]: E1213 09:12:14.397879 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:14.400466 kubelet[2513]: E1213 09:12:14.400399 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:14.423471 kubelet[2513]: I1213 09:12:14.421883 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-69snq" podStartSLOduration=30.421854339 podStartE2EDuration="30.421854339s" podCreationTimestamp="2024-12-13 09:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:13.448136819 +0000 UTC m=+32.649014625" watchObservedRunningTime="2024-12-13 09:12:14.421854339 +0000 UTC m=+33.622732146" Dec 13 09:12:15.399881 kubelet[2513]: E1213 09:12:15.399824 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:15.400396 kubelet[2513]: E1213 09:12:15.399839 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:30.260387 systemd[1]: Started sshd@9-146.190.151.20:22-147.75.109.163:48514.service - OpenSSH per-connection server daemon (147.75.109.163:48514). Dec 13 09:12:30.373557 sshd[3905]: Accepted publickey for core from 147.75.109.163 port 48514 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:30.379156 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:30.394824 systemd-logind[1446]: New session 10 of user core. Dec 13 09:12:30.402664 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 09:12:31.159730 sshd[3905]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:31.178786 systemd[1]: sshd@9-146.190.151.20:22-147.75.109.163:48514.service: Deactivated successfully. Dec 13 09:12:31.189077 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 09:12:31.192308 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Dec 13 09:12:31.205849 systemd-logind[1446]: Removed session 10. Dec 13 09:12:36.176895 systemd[1]: Started sshd@10-146.190.151.20:22-147.75.109.163:34822.service - OpenSSH per-connection server daemon (147.75.109.163:34822). Dec 13 09:12:36.231565 sshd[3920]: Accepted publickey for core from 147.75.109.163 port 34822 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:36.233150 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:36.240555 systemd-logind[1446]: New session 11 of user core. Dec 13 09:12:36.247816 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 09:12:36.407229 sshd[3920]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:36.412615 systemd[1]: sshd@10-146.190.151.20:22-147.75.109.163:34822.service: Deactivated successfully. Dec 13 09:12:36.416447 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 09:12:36.418956 systemd-logind[1446]: Session 11 logged out. 
Waiting for processes to exit. Dec 13 09:12:36.420653 systemd-logind[1446]: Removed session 11. Dec 13 09:12:41.428560 systemd[1]: Started sshd@11-146.190.151.20:22-147.75.109.163:34828.service - OpenSSH per-connection server daemon (147.75.109.163:34828). Dec 13 09:12:41.505142 sshd[3936]: Accepted publickey for core from 147.75.109.163 port 34828 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:41.507555 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:41.514217 systemd-logind[1446]: New session 12 of user core. Dec 13 09:12:41.521782 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 09:12:41.694390 sshd[3936]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:41.702462 systemd[1]: sshd@11-146.190.151.20:22-147.75.109.163:34828.service: Deactivated successfully. Dec 13 09:12:41.705548 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 09:12:41.706400 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Dec 13 09:12:41.707527 systemd-logind[1446]: Removed session 12. Dec 13 09:12:46.715937 systemd[1]: Started sshd@12-146.190.151.20:22-147.75.109.163:33848.service - OpenSSH per-connection server daemon (147.75.109.163:33848). Dec 13 09:12:46.767098 sshd[3952]: Accepted publickey for core from 147.75.109.163 port 33848 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:46.769943 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:46.776546 systemd-logind[1446]: New session 13 of user core. Dec 13 09:12:46.782859 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 09:12:46.930174 sshd[3952]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:46.946487 systemd[1]: sshd@12-146.190.151.20:22-147.75.109.163:33848.service: Deactivated successfully. Dec 13 09:12:46.948971 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 09:12:46.950718 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Dec 13 09:12:46.958923 systemd[1]: Started sshd@13-146.190.151.20:22-147.75.109.163:33854.service - OpenSSH per-connection server daemon (147.75.109.163:33854). Dec 13 09:12:46.961320 systemd-logind[1446]: Removed session 13. Dec 13 09:12:47.019557 sshd[3966]: Accepted publickey for core from 147.75.109.163 port 33854 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:47.021616 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:47.029277 systemd-logind[1446]: New session 14 of user core. Dec 13 09:12:47.032840 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 09:12:47.236755 sshd[3966]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:47.250216 systemd[1]: sshd@13-146.190.151.20:22-147.75.109.163:33854.service: Deactivated successfully. Dec 13 09:12:47.253643 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 09:12:47.255061 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Dec 13 09:12:47.267929 systemd[1]: Started sshd@14-146.190.151.20:22-147.75.109.163:33860.service - OpenSSH per-connection server daemon (147.75.109.163:33860). Dec 13 09:12:47.272579 systemd-logind[1446]: Removed session 14. 
Dec 13 09:12:47.329243 sshd[3977]: Accepted publickey for core from 147.75.109.163 port 33860 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:47.331183 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:47.337913 systemd-logind[1446]: New session 15 of user core. Dec 13 09:12:47.344834 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 09:12:47.497341 sshd[3977]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:47.502074 systemd[1]: sshd@14-146.190.151.20:22-147.75.109.163:33860.service: Deactivated successfully. Dec 13 09:12:47.505463 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 09:12:47.507408 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Dec 13 09:12:47.509287 systemd-logind[1446]: Removed session 15. Dec 13 09:12:52.529867 systemd[1]: Started sshd@15-146.190.151.20:22-147.75.109.163:33868.service - OpenSSH per-connection server daemon (147.75.109.163:33868). Dec 13 09:12:52.594291 sshd[3991]: Accepted publickey for core from 147.75.109.163 port 33868 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:52.597223 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:52.605601 systemd-logind[1446]: New session 16 of user core. Dec 13 09:12:52.614939 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 09:12:52.793938 sshd[3991]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:52.799926 systemd[1]: sshd@15-146.190.151.20:22-147.75.109.163:33868.service: Deactivated successfully. Dec 13 09:12:52.805146 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 09:12:52.807991 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Dec 13 09:12:52.810203 systemd-logind[1446]: Removed session 16. Dec 13 09:12:53.013286 kubelet[2513]: E1213 09:12:53.012670 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:54.011274 kubelet[2513]: E1213 09:12:54.011145 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:12:57.820057 systemd[1]: Started sshd@16-146.190.151.20:22-147.75.109.163:44894.service - OpenSSH per-connection server daemon (147.75.109.163:44894). Dec 13 09:12:57.884557 sshd[4004]: Accepted publickey for core from 147.75.109.163 port 44894 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:57.886108 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:57.892475 systemd-logind[1446]: New session 17 of user core. Dec 13 09:12:57.902854 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 09:12:58.071894 sshd[4004]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:58.084815 systemd[1]: sshd@16-146.190.151.20:22-147.75.109.163:44894.service: Deactivated successfully. Dec 13 09:12:58.089972 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 09:12:58.093011 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Dec 13 09:12:58.100119 systemd[1]: Started sshd@17-146.190.151.20:22-147.75.109.163:44906.service - OpenSSH per-connection server daemon (147.75.109.163:44906). 
Dec 13 09:12:58.104910 systemd-logind[1446]: Removed session 17. Dec 13 09:12:58.159410 sshd[4016]: Accepted publickey for core from 147.75.109.163 port 44906 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:58.162327 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:58.170735 systemd-logind[1446]: New session 18 of user core. Dec 13 09:12:58.176998 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 09:12:58.538724 sshd[4016]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:58.551930 systemd[1]: sshd@17-146.190.151.20:22-147.75.109.163:44906.service: Deactivated successfully. Dec 13 09:12:58.554932 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 09:12:58.557011 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Dec 13 09:12:58.563105 systemd[1]: Started sshd@18-146.190.151.20:22-147.75.109.163:44914.service - OpenSSH per-connection server daemon (147.75.109.163:44914). Dec 13 09:12:58.567168 systemd-logind[1446]: Removed session 18. Dec 13 09:12:58.641169 sshd[4027]: Accepted publickey for core from 147.75.109.163 port 44914 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:58.643646 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:58.650316 systemd-logind[1446]: New session 19 of user core. Dec 13 09:12:58.661912 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 09:13:01.154233 sshd[4027]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:01.186206 systemd[1]: Started sshd@19-146.190.151.20:22-147.75.109.163:44918.service - OpenSSH per-connection server daemon (147.75.109.163:44918). Dec 13 09:13:01.187968 systemd[1]: sshd@18-146.190.151.20:22-147.75.109.163:44914.service: Deactivated successfully. Dec 13 09:13:01.194314 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 09:13:01.200171 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Dec 13 09:13:01.211348 systemd-logind[1446]: Removed session 19. Dec 13 09:13:01.301459 sshd[4041]: Accepted publickey for core from 147.75.109.163 port 44918 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:01.304375 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:01.330916 systemd-logind[1446]: New session 20 of user core. Dec 13 09:13:01.340463 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 09:13:02.190646 sshd[4041]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:02.205728 systemd[1]: sshd@19-146.190.151.20:22-147.75.109.163:44918.service: Deactivated successfully. Dec 13 09:13:02.213844 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 09:13:02.216854 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Dec 13 09:13:02.225721 systemd[1]: Started sshd@20-146.190.151.20:22-147.75.109.163:44934.service - OpenSSH per-connection server daemon (147.75.109.163:44934). Dec 13 09:13:02.284911 systemd-logind[1446]: Removed session 20. Dec 13 09:13:02.338123 sshd[4056]: Accepted publickey for core from 147.75.109.163 port 44934 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:02.342162 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:02.350614 systemd-logind[1446]: New session 21 of user core. 
Dec 13 09:13:02.358143 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 09:13:02.545425 sshd[4056]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:02.554149 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Dec 13 09:13:02.557739 systemd[1]: sshd@20-146.190.151.20:22-147.75.109.163:44934.service: Deactivated successfully. Dec 13 09:13:02.562536 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 09:13:02.565040 systemd-logind[1446]: Removed session 21. Dec 13 09:13:06.012541 kubelet[2513]: E1213 09:13:06.011826 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:07.577088 systemd[1]: Started sshd@21-146.190.151.20:22-147.75.109.163:54560.service - OpenSSH per-connection server daemon (147.75.109.163:54560). Dec 13 09:13:07.624362 sshd[4069]: Accepted publickey for core from 147.75.109.163 port 54560 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:07.627029 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:07.633240 systemd-logind[1446]: New session 22 of user core. Dec 13 09:13:07.641989 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 09:13:07.814794 sshd[4069]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:07.820226 systemd[1]: sshd@21-146.190.151.20:22-147.75.109.163:54560.service: Deactivated successfully. Dec 13 09:13:07.827296 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 09:13:07.832101 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Dec 13 09:13:07.834148 systemd-logind[1446]: Removed session 22. Dec 13 09:13:11.017208 kubelet[2513]: E1213 09:13:11.014419 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:12.839248 systemd[1]: Started sshd@22-146.190.151.20:22-147.75.109.163:54576.service - OpenSSH per-connection server daemon (147.75.109.163:54576). Dec 13 09:13:12.925600 sshd[4085]: Accepted publickey for core from 147.75.109.163 port 54576 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:12.929574 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:12.937971 systemd-logind[1446]: New session 23 of user core. Dec 13 09:13:12.940845 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 09:13:13.028834 kubelet[2513]: E1213 09:13:13.028764 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:13.186798 sshd[4085]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:13.197156 systemd[1]: sshd@22-146.190.151.20:22-147.75.109.163:54576.service: Deactivated successfully. Dec 13 09:13:13.202019 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 09:13:13.205292 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Dec 13 09:13:13.209362 systemd-logind[1446]: Removed session 23. 
Dec 13 09:13:18.013594 kubelet[2513]: E1213 09:13:18.011129 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:18.013594 kubelet[2513]: E1213 09:13:18.012165 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:18.207320 systemd[1]: Started sshd@23-146.190.151.20:22-147.75.109.163:57202.service - OpenSSH per-connection server daemon (147.75.109.163:57202). Dec 13 09:13:18.291697 sshd[4100]: Accepted publickey for core from 147.75.109.163 port 57202 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:18.294284 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:18.306910 systemd-logind[1446]: New session 24 of user core. Dec 13 09:13:18.322473 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 09:13:18.613798 sshd[4100]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:18.623510 systemd[1]: sshd@23-146.190.151.20:22-147.75.109.163:57202.service: Deactivated successfully. Dec 13 09:13:18.627836 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 09:13:18.631781 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Dec 13 09:13:18.636750 systemd-logind[1446]: Removed session 24. Dec 13 09:13:23.637639 systemd[1]: Started sshd@24-146.190.151.20:22-147.75.109.163:57208.service - OpenSSH per-connection server daemon (147.75.109.163:57208). Dec 13 09:13:23.687459 sshd[4115]: Accepted publickey for core from 147.75.109.163 port 57208 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:23.689669 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:23.699612 systemd-logind[1446]: New session 25 of user core. Dec 13 09:13:23.710103 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 09:13:23.878377 sshd[4115]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:23.891666 systemd[1]: sshd@24-146.190.151.20:22-147.75.109.163:57208.service: Deactivated successfully. Dec 13 09:13:23.900370 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 09:13:23.903942 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Dec 13 09:13:23.914278 systemd[1]: Started sshd@25-146.190.151.20:22-147.75.109.163:57222.service - OpenSSH per-connection server daemon (147.75.109.163:57222). Dec 13 09:13:23.917581 systemd-logind[1446]: Removed session 25. Dec 13 09:13:23.987881 sshd[4127]: Accepted publickey for core from 147.75.109.163 port 57222 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:23.994036 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:24.002461 systemd-logind[1446]: New session 26 of user core. Dec 13 09:13:24.008889 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 09:13:26.324454 systemd[1]: run-containerd-runc-k8s.io-423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f-runc.tTJirM.mount: Deactivated successfully. 
Dec 13 09:13:26.354286 containerd[1469]: time="2024-12-13T09:13:26.353167948Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:13:26.377965 containerd[1469]: time="2024-12-13T09:13:26.377899253Z" level=info msg="StopContainer for \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\" with timeout 2 (s)" Dec 13 09:13:26.378121 containerd[1469]: time="2024-12-13T09:13:26.378101205Z" level=info msg="StopContainer for \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\" with timeout 30 (s)" Dec 13 09:13:26.378722 containerd[1469]: time="2024-12-13T09:13:26.378610853Z" level=info msg="Stop container \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\" with signal terminated" Dec 13 09:13:26.379102 containerd[1469]: time="2024-12-13T09:13:26.378684754Z" level=info msg="Stop container \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\" with signal terminated" Dec 13 09:13:26.396467 systemd-networkd[1358]: lxc_health: Link DOWN Dec 13 09:13:26.396790 systemd-networkd[1358]: lxc_health: Lost carrier Dec 13 09:13:26.407905 systemd[1]: cri-containerd-5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a.scope: Deactivated successfully. Dec 13 09:13:26.446226 systemd[1]: cri-containerd-423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f.scope: Deactivated successfully. Dec 13 09:13:26.446558 systemd[1]: cri-containerd-423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f.scope: Consumed 10.096s CPU time. Dec 13 09:13:26.466685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a-rootfs.mount: Deactivated successfully. Dec 13 09:13:26.476991 containerd[1469]: time="2024-12-13T09:13:26.476675445Z" level=info msg="shim disconnected" id=5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a namespace=k8s.io Dec 13 09:13:26.477776 containerd[1469]: time="2024-12-13T09:13:26.477379805Z" level=warning msg="cleaning up after shim disconnected" id=5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a namespace=k8s.io Dec 13 09:13:26.477776 containerd[1469]: time="2024-12-13T09:13:26.477643422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:26.492584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f-rootfs.mount: Deactivated successfully. 
Dec 13 09:13:26.502289 containerd[1469]: time="2024-12-13T09:13:26.502218509Z" level=info msg="shim disconnected" id=423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f namespace=k8s.io Dec 13 09:13:26.502289 containerd[1469]: time="2024-12-13T09:13:26.502279586Z" level=warning msg="cleaning up after shim disconnected" id=423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f namespace=k8s.io Dec 13 09:13:26.502289 containerd[1469]: time="2024-12-13T09:13:26.502289075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:26.513894 containerd[1469]: time="2024-12-13T09:13:26.513843347Z" level=info msg="StopContainer for \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\" returns successfully" Dec 13 09:13:26.519857 containerd[1469]: time="2024-12-13T09:13:26.519581630Z" level=info msg="StopPodSandbox for \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\"" Dec 13 09:13:26.519857 containerd[1469]: time="2024-12-13T09:13:26.519672180Z" level=info msg="Container to stop \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:26.525171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5-shm.mount: Deactivated successfully. Dec 13 09:13:26.535685 containerd[1469]: time="2024-12-13T09:13:26.535580360Z" level=info msg="StopContainer for \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\" returns successfully" Dec 13 09:13:26.536557 containerd[1469]: time="2024-12-13T09:13:26.536480937Z" level=info msg="StopPodSandbox for \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\"" Dec 13 09:13:26.536715 containerd[1469]: time="2024-12-13T09:13:26.536569357Z" level=info msg="Container to stop \"393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:26.536715 containerd[1469]: time="2024-12-13T09:13:26.536586223Z" level=info msg="Container to stop \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:26.536715 containerd[1469]: time="2024-12-13T09:13:26.536597039Z" level=info msg="Container to stop \"09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:26.536715 containerd[1469]: time="2024-12-13T09:13:26.536609554Z" level=info msg="Container to stop \"e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:26.536715 containerd[1469]: time="2024-12-13T09:13:26.536620184Z" level=info msg="Container to stop \"9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:26.543067 systemd[1]: cri-containerd-01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5.scope: Deactivated successfully. Dec 13 09:13:26.553043 systemd[1]: cri-containerd-7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5.scope: Deactivated successfully. 
Dec 13 09:13:26.591238 containerd[1469]: time="2024-12-13T09:13:26.590749998Z" level=info msg="shim disconnected" id=01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5 namespace=k8s.io Dec 13 09:13:26.591238 containerd[1469]: time="2024-12-13T09:13:26.590829870Z" level=warning msg="cleaning up after shim disconnected" id=01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5 namespace=k8s.io Dec 13 09:13:26.591238 containerd[1469]: time="2024-12-13T09:13:26.590846739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:26.603413 containerd[1469]: time="2024-12-13T09:13:26.603029231Z" level=info msg="shim disconnected" id=7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5 namespace=k8s.io Dec 13 09:13:26.603413 containerd[1469]: time="2024-12-13T09:13:26.603092816Z" level=warning msg="cleaning up after shim disconnected" id=7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5 namespace=k8s.io Dec 13 09:13:26.603413 containerd[1469]: time="2024-12-13T09:13:26.603102265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:26.615098 containerd[1469]: time="2024-12-13T09:13:26.614802532Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:13:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:13:26.626202 kubelet[2513]: I1213 09:13:26.626063 2513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5" Dec 13 09:13:26.633881 containerd[1469]: time="2024-12-13T09:13:26.633653634Z" level=info msg="TearDown network for sandbox \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" successfully" Dec 13 09:13:26.633881 containerd[1469]: time="2024-12-13T09:13:26.633700507Z" level=info msg="StopPodSandbox for \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" returns successfully" Dec 13 09:13:26.639775 containerd[1469]: time="2024-12-13T09:13:26.639721317Z" level=info msg="TearDown network for sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" successfully" Dec 13 09:13:26.639775 containerd[1469]: time="2024-12-13T09:13:26.639770746Z" level=info msg="StopPodSandbox for \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" returns successfully" Dec 13 09:13:26.763534 kubelet[2513]: I1213 09:13:26.762626 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-bpf-maps\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763534 kubelet[2513]: I1213 09:13:26.762716 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hubble-tls\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763534 kubelet[2513]: I1213 09:13:26.762745 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cni-path\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763534 kubelet[2513]: I1213 09:13:26.762779 2513 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-config-path\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763534 kubelet[2513]: I1213 09:13:26.762807 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-xtables-lock\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763534 kubelet[2513]: I1213 09:13:26.762832 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-kernel\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763853 kubelet[2513]: I1213 09:13:26.762857 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgzhm\" (UniqueName: \"kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-kube-api-access-zgzhm\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763853 kubelet[2513]: I1213 09:13:26.762881 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hostproc\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763853 kubelet[2513]: I1213 09:13:26.762903 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-lib-modules\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763853 kubelet[2513]: I1213 09:13:26.762926 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-run\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.763853 kubelet[2513]: I1213 09:13:26.762973 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca543a1e-a48e-4fd7-b1da-fc54da14712c-cilium-config-path\") pod \"ca543a1e-a48e-4fd7-b1da-fc54da14712c\" (UID: \"ca543a1e-a48e-4fd7-b1da-fc54da14712c\") " Dec 13 09:13:26.763853 kubelet[2513]: I1213 09:13:26.762997 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-cgroup\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.764011 kubelet[2513]: I1213 09:13:26.763024 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-clustermesh-secrets\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.764011 kubelet[2513]: I1213 09:13:26.763047 2513 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-net\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.764011 kubelet[2513]: I1213 09:13:26.763070 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-etc-cni-netd\") pod \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\" (UID: \"56c6e77a-9013-47b8-99a9-4dc5b9930b0c\") " Dec 13 09:13:26.764011 kubelet[2513]: I1213 09:13:26.763089 2513 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2frjb\" (UniqueName: \"kubernetes.io/projected/ca543a1e-a48e-4fd7-b1da-fc54da14712c-kube-api-access-2frjb\") pod \"ca543a1e-a48e-4fd7-b1da-fc54da14712c\" (UID: \"ca543a1e-a48e-4fd7-b1da-fc54da14712c\") " Dec 13 09:13:26.768151 kubelet[2513]: I1213 09:13:26.768070 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca543a1e-a48e-4fd7-b1da-fc54da14712c-kube-api-access-2frjb" (OuterVolumeSpecName: "kube-api-access-2frjb") pod "ca543a1e-a48e-4fd7-b1da-fc54da14712c" (UID: "ca543a1e-a48e-4fd7-b1da-fc54da14712c"). InnerVolumeSpecName "kube-api-access-2frjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:13:26.769598 kubelet[2513]: I1213 09:13:26.768852 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.769598 kubelet[2513]: I1213 09:13:26.768954 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.769598 kubelet[2513]: I1213 09:13:26.768978 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cni-path" (OuterVolumeSpecName: "cni-path") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.771702 kubelet[2513]: I1213 09:13:26.771655 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:13:26.771807 kubelet[2513]: I1213 09:13:26.771720 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.772293 kubelet[2513]: I1213 09:13:26.772256 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:13:26.772410 kubelet[2513]: I1213 09:13:26.772395 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.772480 kubelet[2513]: I1213 09:13:26.772461 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.774738 kubelet[2513]: I1213 09:13:26.774687 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca543a1e-a48e-4fd7-b1da-fc54da14712c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ca543a1e-a48e-4fd7-b1da-fc54da14712c" (UID: "ca543a1e-a48e-4fd7-b1da-fc54da14712c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:13:26.774873 kubelet[2513]: I1213 09:13:26.774756 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.776749 kubelet[2513]: I1213 09:13:26.776702 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-kube-api-access-zgzhm" (OuterVolumeSpecName: "kube-api-access-zgzhm") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "kube-api-access-zgzhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:13:26.776942 kubelet[2513]: I1213 09:13:26.776917 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hostproc" (OuterVolumeSpecName: "hostproc") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.777049 kubelet[2513]: I1213 09:13:26.777033 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.777134 kubelet[2513]: I1213 09:13:26.777115 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:26.778365 kubelet[2513]: I1213 09:13:26.778323 2513 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56c6e77a-9013-47b8-99a9-4dc5b9930b0c" (UID: "56c6e77a-9013-47b8-99a9-4dc5b9930b0c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864332 2513 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hubble-tls\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864385 2513 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cni-path\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864398 2513 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-config-path\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864410 2513 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-xtables-lock\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864420 2513 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-kernel\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864432 2513 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zgzhm\" (UniqueName: \"kubernetes.io/projected/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-kube-api-access-zgzhm\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864446 2513 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-hostproc\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864546 kubelet[2513]: I1213 09:13:26.864460 2513 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-lib-modules\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864990 kubelet[2513]: I1213 09:13:26.864480 2513 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-run\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 
09:13:26.864990 kubelet[2513]: I1213 09:13:26.864521 2513 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca543a1e-a48e-4fd7-b1da-fc54da14712c-cilium-config-path\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864990 kubelet[2513]: I1213 09:13:26.864535 2513 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-cilium-cgroup\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864990 kubelet[2513]: I1213 09:13:26.864551 2513 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-clustermesh-secrets\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864990 kubelet[2513]: I1213 09:13:26.864565 2513 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-host-proc-sys-net\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864990 kubelet[2513]: I1213 09:13:26.864579 2513 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-etc-cni-netd\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864990 kubelet[2513]: I1213 09:13:26.864594 2513 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2frjb\" (UniqueName: \"kubernetes.io/projected/ca543a1e-a48e-4fd7-b1da-fc54da14712c-kube-api-access-2frjb\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:26.864990 kubelet[2513]: I1213 09:13:26.864607 2513 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56c6e77a-9013-47b8-99a9-4dc5b9930b0c-bpf-maps\") on node \"ci-4081.2.1-5-05f51c210a\" DevicePath \"\"" Dec 13 09:13:27.022790 systemd[1]: Removed slice kubepods-burstable-pod56c6e77a_9013_47b8_99a9_4dc5b9930b0c.slice - libcontainer container kubepods-burstable-pod56c6e77a_9013_47b8_99a9_4dc5b9930b0c.slice. Dec 13 09:13:27.023257 systemd[1]: kubepods-burstable-pod56c6e77a_9013_47b8_99a9_4dc5b9930b0c.slice: Consumed 10.209s CPU time. Dec 13 09:13:27.026085 systemd[1]: Removed slice kubepods-besteffort-podca543a1e_a48e_4fd7_b1da_fc54da14712c.slice - libcontainer container kubepods-besteffort-podca543a1e_a48e_4fd7_b1da_fc54da14712c.slice. Dec 13 09:13:27.316902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5-rootfs.mount: Deactivated successfully. Dec 13 09:13:27.317071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5-rootfs.mount: Deactivated successfully. Dec 13 09:13:27.317160 systemd[1]: var-lib-kubelet-pods-ca543a1e\x2da48e\x2d4fd7\x2db1da\x2dfc54da14712c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2frjb.mount: Deactivated successfully. Dec 13 09:13:27.317272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5-shm.mount: Deactivated successfully. Dec 13 09:13:27.317372 systemd[1]: var-lib-kubelet-pods-56c6e77a\x2d9013\x2d47b8\x2d99a9\x2d4dc5b9930b0c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzgzhm.mount: Deactivated successfully. 
Dec 13 09:13:27.317475 systemd[1]: var-lib-kubelet-pods-56c6e77a\x2d9013\x2d47b8\x2d99a9\x2d4dc5b9930b0c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 09:13:27.317596 systemd[1]: var-lib-kubelet-pods-56c6e77a\x2d9013\x2d47b8\x2d99a9\x2d4dc5b9930b0c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 09:13:27.636808 kubelet[2513]: I1213 09:13:27.636682 2513 scope.go:117] "RemoveContainer" containerID="423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f" Dec 13 09:13:27.659333 containerd[1469]: time="2024-12-13T09:13:27.657122114Z" level=info msg="RemoveContainer for \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\"" Dec 13 09:13:27.666827 containerd[1469]: time="2024-12-13T09:13:27.666762477Z" level=info msg="RemoveContainer for \"423f208f78fe90a485e0d0c98305ab89219dafc70091f0815e284846cacfe30f\" returns successfully" Dec 13 09:13:27.667596 kubelet[2513]: I1213 09:13:27.667571 2513 scope.go:117] "RemoveContainer" containerID="9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317" Dec 13 09:13:27.668953 containerd[1469]: time="2024-12-13T09:13:27.668905414Z" level=info msg="RemoveContainer for \"9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317\"" Dec 13 09:13:27.672711 containerd[1469]: time="2024-12-13T09:13:27.672631131Z" level=info msg="RemoveContainer for \"9a2867a414bc4f20255f9bc8cef1fd9dd72141497792ee8f88f4adeae3776317\" returns successfully" Dec 13 09:13:27.674761 kubelet[2513]: I1213 09:13:27.674718 2513 scope.go:117] "RemoveContainer" containerID="393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66" Dec 13 09:13:27.680013 containerd[1469]: time="2024-12-13T09:13:27.679171922Z" level=info msg="RemoveContainer for \"393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66\"" Dec 13 09:13:27.682944 containerd[1469]: time="2024-12-13T09:13:27.682872945Z" level=info msg="RemoveContainer for \"393ab94d5d1419b129e37905c78e8b95c82a7aa1a3794f969d2b024c616efb66\" returns successfully" Dec 13 09:13:27.684347 kubelet[2513]: I1213 09:13:27.684311 2513 scope.go:117] "RemoveContainer" containerID="e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe" Dec 13 09:13:27.686532 containerd[1469]: time="2024-12-13T09:13:27.686455736Z" level=info msg="RemoveContainer for \"e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe\"" Dec 13 09:13:27.690053 containerd[1469]: time="2024-12-13T09:13:27.689854006Z" level=info msg="RemoveContainer for \"e2a8f5d41e541aa1140caf67f0552c9481ae577ac733f288ce8c1a7d6c5a89fe\" returns successfully" Dec 13 09:13:27.692088 kubelet[2513]: I1213 09:13:27.690170 2513 scope.go:117] "RemoveContainer" containerID="09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1" Dec 13 09:13:27.695773 containerd[1469]: time="2024-12-13T09:13:27.695711306Z" level=info msg="RemoveContainer for \"09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1\"" Dec 13 09:13:27.699485 containerd[1469]: time="2024-12-13T09:13:27.699389575Z" level=info msg="RemoveContainer for \"09201d2ae86bfbc45c401ac2f7f1375629f0e126f3f453cd49fb7ba361a630f1\" returns successfully" Dec 13 09:13:28.202380 sshd[4127]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:28.215100 systemd[1]: sshd@25-146.190.151.20:22-147.75.109.163:57222.service: Deactivated successfully. Dec 13 09:13:28.218269 systemd[1]: session-26.scope: Deactivated successfully. 
Dec 13 09:13:28.218703 systemd[1]: session-26.scope: Consumed 1.527s CPU time. Dec 13 09:13:28.220698 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Dec 13 09:13:28.226165 systemd[1]: Started sshd@26-146.190.151.20:22-147.75.109.163:42610.service - OpenSSH per-connection server daemon (147.75.109.163:42610). Dec 13 09:13:28.228444 systemd-logind[1446]: Removed session 26. Dec 13 09:13:28.300586 sshd[4288]: Accepted publickey for core from 147.75.109.163 port 42610 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:28.302754 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:28.313732 systemd-logind[1446]: New session 27 of user core. Dec 13 09:13:28.322978 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 09:13:29.019718 kubelet[2513]: I1213 09:13:29.019101 2513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c6e77a-9013-47b8-99a9-4dc5b9930b0c" path="/var/lib/kubelet/pods/56c6e77a-9013-47b8-99a9-4dc5b9930b0c/volumes" Dec 13 09:13:29.021179 kubelet[2513]: I1213 09:13:29.020729 2513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca543a1e-a48e-4fd7-b1da-fc54da14712c" path="/var/lib/kubelet/pods/ca543a1e-a48e-4fd7-b1da-fc54da14712c/volumes" Dec 13 09:13:29.022195 sshd[4288]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:29.039023 systemd[1]: sshd@26-146.190.151.20:22-147.75.109.163:42610.service: Deactivated successfully. Dec 13 09:13:29.046753 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 09:13:29.053774 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit. Dec 13 09:13:29.066021 systemd[1]: Started sshd@27-146.190.151.20:22-147.75.109.163:42620.service - OpenSSH per-connection server daemon (147.75.109.163:42620). Dec 13 09:13:29.067299 systemd-logind[1446]: Removed session 27. 
Dec 13 09:13:29.071845 kubelet[2513]: E1213 09:13:29.071110 2513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56c6e77a-9013-47b8-99a9-4dc5b9930b0c" containerName="mount-cgroup" Dec 13 09:13:29.072437 kubelet[2513]: E1213 09:13:29.072004 2513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca543a1e-a48e-4fd7-b1da-fc54da14712c" containerName="cilium-operator" Dec 13 09:13:29.072437 kubelet[2513]: E1213 09:13:29.072030 2513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56c6e77a-9013-47b8-99a9-4dc5b9930b0c" containerName="clean-cilium-state" Dec 13 09:13:29.072437 kubelet[2513]: E1213 09:13:29.072043 2513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56c6e77a-9013-47b8-99a9-4dc5b9930b0c" containerName="apply-sysctl-overwrites" Dec 13 09:13:29.072437 kubelet[2513]: E1213 09:13:29.072049 2513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56c6e77a-9013-47b8-99a9-4dc5b9930b0c" containerName="mount-bpf-fs" Dec 13 09:13:29.072437 kubelet[2513]: E1213 09:13:29.072057 2513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56c6e77a-9013-47b8-99a9-4dc5b9930b0c" containerName="cilium-agent" Dec 13 09:13:29.072437 kubelet[2513]: I1213 09:13:29.072105 2513 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c6e77a-9013-47b8-99a9-4dc5b9930b0c" containerName="cilium-agent" Dec 13 09:13:29.072437 kubelet[2513]: I1213 09:13:29.072112 2513 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca543a1e-a48e-4fd7-b1da-fc54da14712c" containerName="cilium-operator" Dec 13 09:13:29.124933 systemd[1]: Created slice kubepods-burstable-pod3f28b657_ce60_4acc_a268_9dd6ed3fe053.slice - libcontainer container kubepods-burstable-pod3f28b657_ce60_4acc_a268_9dd6ed3fe053.slice. Dec 13 09:13:29.154742 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 42620 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:29.157359 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:29.166679 systemd-logind[1446]: New session 28 of user core. Dec 13 09:13:29.169713 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 13 09:13:29.185704 kubelet[2513]: I1213 09:13:29.185645 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-bpf-maps\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186024 kubelet[2513]: I1213 09:13:29.185716 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f28b657-ce60-4acc-a268-9dd6ed3fe053-cilium-config-path\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186024 kubelet[2513]: I1213 09:13:29.185751 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-cilium-run\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186024 kubelet[2513]: I1213 09:13:29.185878 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-hostproc\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186024 kubelet[2513]: I1213 09:13:29.185911 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-cilium-cgroup\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186024 kubelet[2513]: I1213 09:13:29.185937 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-etc-cni-netd\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186024 kubelet[2513]: I1213 09:13:29.185963 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f28b657-ce60-4acc-a268-9dd6ed3fe053-clustermesh-secrets\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186259 kubelet[2513]: I1213 09:13:29.186000 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3f28b657-ce60-4acc-a268-9dd6ed3fe053-cilium-ipsec-secrets\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186259 kubelet[2513]: I1213 09:13:29.186028 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f28b657-ce60-4acc-a268-9dd6ed3fe053-hubble-tls\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186259 kubelet[2513]: I1213 09:13:29.186056 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-host-proc-sys-kernel\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186259 kubelet[2513]: I1213 09:13:29.186089 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-cni-path\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186259 kubelet[2513]: I1213 09:13:29.186112 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmw2\" (UniqueName: \"kubernetes.io/projected/3f28b657-ce60-4acc-a268-9dd6ed3fe053-kube-api-access-wpmw2\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.186259 kubelet[2513]: I1213 09:13:29.186139 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-lib-modules\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.187392 kubelet[2513]: I1213 09:13:29.186162 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-xtables-lock\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.187392 kubelet[2513]: I1213 09:13:29.186190 2513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f28b657-ce60-4acc-a268-9dd6ed3fe053-host-proc-sys-net\") pod \"cilium-mbtgc\" (UID: \"3f28b657-ce60-4acc-a268-9dd6ed3fe053\") " pod="kube-system/cilium-mbtgc" Dec 13 09:13:29.236464 sshd[4300]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:29.247141 systemd[1]: sshd@27-146.190.151.20:22-147.75.109.163:42620.service: Deactivated successfully. Dec 13 09:13:29.251953 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 09:13:29.254713 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit. Dec 13 09:13:29.267609 systemd[1]: Started sshd@28-146.190.151.20:22-147.75.109.163:42634.service - OpenSSH per-connection server daemon (147.75.109.163:42634). Dec 13 09:13:29.270358 systemd-logind[1446]: Removed session 28. Dec 13 09:13:29.364964 sshd[4308]: Accepted publickey for core from 147.75.109.163 port 42634 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:29.367444 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:29.375632 systemd-logind[1446]: New session 29 of user core. Dec 13 09:13:29.381939 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 13 09:13:29.437572 kubelet[2513]: E1213 09:13:29.436835 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:29.438793 containerd[1469]: time="2024-12-13T09:13:29.438727940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbtgc,Uid:3f28b657-ce60-4acc-a268-9dd6ed3fe053,Namespace:kube-system,Attempt:0,}" Dec 13 09:13:29.474121 containerd[1469]: time="2024-12-13T09:13:29.473193441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:13:29.474121 containerd[1469]: time="2024-12-13T09:13:29.473286449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:13:29.474121 containerd[1469]: time="2024-12-13T09:13:29.473333422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:13:29.474121 containerd[1469]: time="2024-12-13T09:13:29.473517497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:13:29.508737 systemd[1]: Started cri-containerd-2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076.scope - libcontainer container 2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076. Dec 13 09:13:29.555972 containerd[1469]: time="2024-12-13T09:13:29.554025431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbtgc,Uid:3f28b657-ce60-4acc-a268-9dd6ed3fe053,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\"" Dec 13 09:13:29.557194 kubelet[2513]: E1213 09:13:29.557165 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:29.567436 containerd[1469]: time="2024-12-13T09:13:29.567378326Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 09:13:29.583161 containerd[1469]: time="2024-12-13T09:13:29.582948662Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31\"" Dec 13 09:13:29.587032 containerd[1469]: time="2024-12-13T09:13:29.586905107Z" level=info msg="StartContainer for \"c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31\"" Dec 13 09:13:29.622765 systemd[1]: Started cri-containerd-c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31.scope - libcontainer container c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31. Dec 13 09:13:29.667193 containerd[1469]: time="2024-12-13T09:13:29.667144086Z" level=info msg="StartContainer for \"c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31\" returns successfully" Dec 13 09:13:29.683934 systemd[1]: cri-containerd-c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31.scope: Deactivated successfully. 
Dec 13 09:13:29.727606 containerd[1469]: time="2024-12-13T09:13:29.727534710Z" level=info msg="shim disconnected" id=c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31 namespace=k8s.io Dec 13 09:13:29.727943 containerd[1469]: time="2024-12-13T09:13:29.727921959Z" level=warning msg="cleaning up after shim disconnected" id=c00b35a01b280f7b980fbb60fd7cea77bdb3f1b5cee42e8a20bb0f910b4f4e31 namespace=k8s.io Dec 13 09:13:29.728041 containerd[1469]: time="2024-12-13T09:13:29.728026945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:30.655832 kubelet[2513]: E1213 09:13:30.654655 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:30.663269 containerd[1469]: time="2024-12-13T09:13:30.662843507Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 09:13:30.704182 containerd[1469]: time="2024-12-13T09:13:30.704091041Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851\"" Dec 13 09:13:30.709543 containerd[1469]: time="2024-12-13T09:13:30.706431908Z" level=info msg="StartContainer for \"959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851\"" Dec 13 09:13:30.774750 systemd[1]: Started cri-containerd-959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851.scope - libcontainer container 959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851. Dec 13 09:13:30.856580 containerd[1469]: time="2024-12-13T09:13:30.856459973Z" level=info msg="StartContainer for \"959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851\" returns successfully" Dec 13 09:13:30.873257 systemd[1]: cri-containerd-959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851.scope: Deactivated successfully. Dec 13 09:13:30.911157 containerd[1469]: time="2024-12-13T09:13:30.910953278Z" level=info msg="shim disconnected" id=959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851 namespace=k8s.io Dec 13 09:13:30.911157 containerd[1469]: time="2024-12-13T09:13:30.911020982Z" level=warning msg="cleaning up after shim disconnected" id=959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851 namespace=k8s.io Dec 13 09:13:30.911157 containerd[1469]: time="2024-12-13T09:13:30.911030252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:31.251198 kubelet[2513]: E1213 09:13:31.251132 2513 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 09:13:31.300752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-959a9bdae8d24bd65413188ab05c749709cb4ff4d90177153b08deb84a64c851-rootfs.mount: Deactivated successfully. 
Dec 13 09:13:31.660290 kubelet[2513]: E1213 09:13:31.659800 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:31.663381 containerd[1469]: time="2024-12-13T09:13:31.663211619Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 09:13:31.699221 containerd[1469]: time="2024-12-13T09:13:31.696594906Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367\"" Dec 13 09:13:31.702270 containerd[1469]: time="2024-12-13T09:13:31.700462530Z" level=info msg="StartContainer for \"969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367\"" Dec 13 09:13:31.799888 systemd[1]: Started cri-containerd-969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367.scope - libcontainer container 969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367. Dec 13 09:13:31.851979 containerd[1469]: time="2024-12-13T09:13:31.851167200Z" level=info msg="StartContainer for \"969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367\" returns successfully" Dec 13 09:13:31.856579 systemd[1]: cri-containerd-969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367.scope: Deactivated successfully. Dec 13 09:13:31.906002 containerd[1469]: time="2024-12-13T09:13:31.905884206Z" level=info msg="shim disconnected" id=969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367 namespace=k8s.io Dec 13 09:13:31.906002 containerd[1469]: time="2024-12-13T09:13:31.905985642Z" level=warning msg="cleaning up after shim disconnected" id=969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367 namespace=k8s.io Dec 13 09:13:31.906002 containerd[1469]: time="2024-12-13T09:13:31.906001006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:31.927060 containerd[1469]: time="2024-12-13T09:13:31.926903708Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:13:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:13:32.300825 systemd[1]: run-containerd-runc-k8s.io-969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367-runc.8Oluaq.mount: Deactivated successfully. Dec 13 09:13:32.300951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-969abf91c1f574af2416f78fcf06c00a007793d9d64937d93c8a0f7a96ce5367-rootfs.mount: Deactivated successfully. 
Dec 13 09:13:32.667024 kubelet[2513]: E1213 09:13:32.664930 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:32.670608 containerd[1469]: time="2024-12-13T09:13:32.670554025Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 09:13:32.698911 containerd[1469]: time="2024-12-13T09:13:32.698835728Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef\"" Dec 13 09:13:32.702341 containerd[1469]: time="2024-12-13T09:13:32.700666968Z" level=info msg="StartContainer for \"d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef\"" Dec 13 09:13:32.757906 systemd[1]: Started cri-containerd-d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef.scope - libcontainer container d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef. Dec 13 09:13:32.793302 systemd[1]: cri-containerd-d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef.scope: Deactivated successfully. Dec 13 09:13:32.800859 containerd[1469]: time="2024-12-13T09:13:32.800782799Z" level=info msg="StartContainer for \"d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef\" returns successfully" Dec 13 09:13:32.848715 containerd[1469]: time="2024-12-13T09:13:32.848357433Z" level=info msg="shim disconnected" id=d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef namespace=k8s.io Dec 13 09:13:32.848715 containerd[1469]: time="2024-12-13T09:13:32.848439167Z" level=warning msg="cleaning up after shim disconnected" id=d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef namespace=k8s.io Dec 13 09:13:32.848715 containerd[1469]: time="2024-12-13T09:13:32.848453935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:33.256547 kubelet[2513]: I1213 09:13:33.256284 2513 setters.go:600] "Node became not ready" node="ci-4081.2.1-5-05f51c210a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T09:13:33Z","lastTransitionTime":"2024-12-13T09:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 09:13:33.301073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c8a0a12379144c0ce4576db160f3f720b5cee3d0ec17aac1b8077d17f4e7ef-rootfs.mount: Deactivated successfully. 
Dec 13 09:13:33.671676 kubelet[2513]: E1213 09:13:33.670617 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:33.678553 containerd[1469]: time="2024-12-13T09:13:33.676635754Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 09:13:33.745555 containerd[1469]: time="2024-12-13T09:13:33.745252971Z" level=info msg="CreateContainer within sandbox \"2d330701f1907902f8ad345d819f93084c7d9426d79f808c733c2908bfc5b076\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"66274bfab514933ba7cb0511220ec245330f52d3ec237e678304b467a25dadf3\"" Dec 13 09:13:33.749837 containerd[1469]: time="2024-12-13T09:13:33.748143403Z" level=info msg="StartContainer for \"66274bfab514933ba7cb0511220ec245330f52d3ec237e678304b467a25dadf3\"" Dec 13 09:13:33.813965 systemd[1]: Started cri-containerd-66274bfab514933ba7cb0511220ec245330f52d3ec237e678304b467a25dadf3.scope - libcontainer container 66274bfab514933ba7cb0511220ec245330f52d3ec237e678304b467a25dadf3. Dec 13 09:13:33.864063 containerd[1469]: time="2024-12-13T09:13:33.863588250Z" level=info msg="StartContainer for \"66274bfab514933ba7cb0511220ec245330f52d3ec237e678304b467a25dadf3\" returns successfully" Dec 13 09:13:34.431860 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 09:13:34.679148 kubelet[2513]: E1213 09:13:34.679103 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:35.682694 kubelet[2513]: E1213 09:13:35.682049 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:36.686831 kubelet[2513]: E1213 09:13:36.686703 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:39.052333 systemd-networkd[1358]: lxc_health: Link UP Dec 13 09:13:39.059818 systemd-networkd[1358]: lxc_health: Gained carrier Dec 13 09:13:39.442945 kubelet[2513]: E1213 09:13:39.442593 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:39.470687 kubelet[2513]: I1213 09:13:39.470595 2513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mbtgc" podStartSLOduration=10.470569 podStartE2EDuration="10.470569s" podCreationTimestamp="2024-12-13 09:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:13:34.713437878 +0000 UTC m=+113.914315697" watchObservedRunningTime="2024-12-13 09:13:39.470569 +0000 UTC m=+118.671446814" Dec 13 09:13:39.697411 kubelet[2513]: E1213 09:13:39.697237 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:40.508800 systemd-networkd[1358]: lxc_health: Gained 
IPv6LL Dec 13 09:13:40.700266 kubelet[2513]: E1213 09:13:40.700197 2513 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:40.964144 kubelet[2513]: E1213 09:13:40.963926 2513 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44850->127.0.0.1:46623: write tcp 127.0.0.1:44850->127.0.0.1:46623: write: broken pipe Dec 13 09:13:40.964144 kubelet[2513]: E1213 09:13:40.963997 2513 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:44850->127.0.0.1:46623: read tcp 127.0.0.1:44850->127.0.0.1:46623: read: connection reset by peer Dec 13 09:13:41.035258 kubelet[2513]: I1213 09:13:41.035043 2513 scope.go:117] "RemoveContainer" containerID="5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a" Dec 13 09:13:41.036595 containerd[1469]: time="2024-12-13T09:13:41.036543399Z" level=info msg="RemoveContainer for \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\"" Dec 13 09:13:41.045268 containerd[1469]: time="2024-12-13T09:13:41.045154840Z" level=info msg="RemoveContainer for \"5df1ca3c980974eadbce73cb6ca97cca9eb79b0e84708cec3c518df7b6773e3a\" returns successfully" Dec 13 09:13:41.048693 containerd[1469]: time="2024-12-13T09:13:41.048634623Z" level=info msg="StopPodSandbox for \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\"" Dec 13 09:13:41.048850 containerd[1469]: time="2024-12-13T09:13:41.048759676Z" level=info msg="TearDown network for sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" successfully" Dec 13 09:13:41.048850 containerd[1469]: time="2024-12-13T09:13:41.048772914Z" level=info msg="StopPodSandbox for \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" returns successfully" Dec 13 09:13:41.049330 containerd[1469]: time="2024-12-13T09:13:41.049292758Z" level=info msg="RemovePodSandbox for \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\"" Dec 13 09:13:41.049330 containerd[1469]: time="2024-12-13T09:13:41.049330316Z" level=info msg="Forcibly stopping sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\"" Dec 13 09:13:41.049440 containerd[1469]: time="2024-12-13T09:13:41.049389789Z" level=info msg="TearDown network for sandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" successfully" Dec 13 09:13:41.054804 containerd[1469]: time="2024-12-13T09:13:41.054743456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:13:41.054987 containerd[1469]: time="2024-12-13T09:13:41.054824156Z" level=info msg="RemovePodSandbox \"7459b9df1d0a2f7cfb466b5a7c2a25127a4e5e614f5cf24b682a8219783e30d5\" returns successfully" Dec 13 09:13:41.055537 containerd[1469]: time="2024-12-13T09:13:41.055461551Z" level=info msg="StopPodSandbox for \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\"" Dec 13 09:13:41.055664 containerd[1469]: time="2024-12-13T09:13:41.055603235Z" level=info msg="TearDown network for sandbox \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" successfully" Dec 13 09:13:41.055664 containerd[1469]: time="2024-12-13T09:13:41.055622775Z" level=info msg="StopPodSandbox for \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" returns successfully" Dec 13 09:13:41.056730 containerd[1469]: time="2024-12-13T09:13:41.056028569Z" level=info msg="RemovePodSandbox for \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\"" Dec 13 09:13:41.056730 containerd[1469]: time="2024-12-13T09:13:41.056055924Z" level=info msg="Forcibly stopping sandbox \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\"" Dec 13 09:13:41.056730 containerd[1469]: time="2024-12-13T09:13:41.056119133Z" level=info msg="TearDown network for sandbox \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" successfully" Dec 13 09:13:41.060117 containerd[1469]: time="2024-12-13T09:13:41.060053859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:13:41.060271 containerd[1469]: time="2024-12-13T09:13:41.060155105Z" level=info msg="RemovePodSandbox \"01a95862e00d8efa5c2ff145e48902cbb956f258a5982e8553c38828611a00a5\" returns successfully" Dec 13 09:13:45.487689 sshd[4308]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:45.495627 systemd[1]: sshd@28-146.190.151.20:22-147.75.109.163:42634.service: Deactivated successfully. Dec 13 09:13:45.500373 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 09:13:45.504458 systemd-logind[1446]: Session 29 logged out. Waiting for processes to exit. Dec 13 09:13:45.506453 systemd-logind[1446]: Removed session 29.