Mar 17 17:53:13.130822 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:53:13.130883 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:53:13.130907 kernel: BIOS-provided physical RAM map:
Mar 17 17:53:13.130919 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:53:13.130929 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:53:13.130941 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:53:13.130955 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Mar 17 17:53:13.130967 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Mar 17 17:53:13.130979 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:53:13.130990 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:53:13.131007 kernel: NX (Execute Disable) protection: active
Mar 17 17:53:13.131029 kernel: APIC: Static calls initialized
Mar 17 17:53:13.131042 kernel: SMBIOS 2.8 present.
Mar 17 17:53:13.131054 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Mar 17 17:53:13.131069 kernel: Hypervisor detected: KVM
Mar 17 17:53:13.131082 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:53:13.131104 kernel: kvm-clock: using sched offset of 3728761728 cycles
Mar 17 17:53:13.131119 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:53:13.131134 kernel: tsc: Detected 2494.140 MHz processor
Mar 17 17:53:13.131148 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:53:13.131162 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:53:13.131175 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Mar 17 17:53:13.131189 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:53:13.131202 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:53:13.131221 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:53:13.131234 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Mar 17 17:53:13.131248 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:53:13.131262 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:53:13.131275 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:53:13.131288 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 17 17:53:13.131301 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:53:13.131315 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:53:13.131328 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:53:13.131345 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:53:13.131358 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Mar 17 17:53:13.131371 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Mar 17 17:53:13.131385 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 17 17:53:13.131398 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Mar 17 17:53:13.131412 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Mar 17 17:53:13.131428 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Mar 17 17:53:13.131449 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Mar 17 17:53:13.131467 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:53:13.131484 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:53:13.131498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 17:53:13.131513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 17:53:13.131647 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Mar 17 17:53:13.131664 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Mar 17 17:53:13.131685 kernel: Zone ranges:
Mar 17 17:53:13.131700 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:53:13.131714 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Mar 17 17:53:13.131729 kernel: Normal empty
Mar 17 17:53:13.131776 kernel: Movable zone start for each node
Mar 17 17:53:13.131790 kernel: Early memory node ranges
Mar 17 17:53:13.131805 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:53:13.131819 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Mar 17 17:53:13.131834 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Mar 17 17:53:13.131854 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:53:13.131869 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:53:13.131887 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Mar 17 17:53:13.131901 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:53:13.131916 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:53:13.131931 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:53:13.131945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:53:13.131960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:53:13.131975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:53:13.131989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:53:13.132009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:53:13.132023 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:53:13.132037 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:53:13.132052 kernel: TSC deadline timer available
Mar 17 17:53:13.132066 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:53:13.132080 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:53:13.132094 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Mar 17 17:53:13.132112 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:53:13.132127 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:53:13.132147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:53:13.132161 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:53:13.132176 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:53:13.132190 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:53:13.132204 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 17:53:13.132220 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:53:13.132235 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:53:13.132249 kernel: random: crng init done
Mar 17 17:53:13.132268 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:53:13.132283 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:53:13.132295 kernel: Fallback order for Node 0: 0
Mar 17 17:53:13.132308 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Mar 17 17:53:13.132322 kernel: Policy zone: DMA32
Mar 17 17:53:13.132337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:53:13.132353 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 127196K reserved, 0K cma-reserved)
Mar 17 17:53:13.132367 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:53:13.132381 kernel: Kernel/User page tables isolation: enabled
Mar 17 17:53:13.132401 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:53:13.132416 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:53:13.132431 kernel: Dynamic Preempt: voluntary
Mar 17 17:53:13.132445 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:53:13.132469 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:53:13.132484 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:53:13.132499 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:53:13.132513 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:53:13.132527 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:53:13.132547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:53:13.132563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:53:13.132578 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:53:13.132596 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:53:13.132609 kernel: Console: colour VGA+ 80x25
Mar 17 17:53:13.132620 kernel: printk: console [tty0] enabled
Mar 17 17:53:13.132633 kernel: printk: console [ttyS0] enabled
Mar 17 17:53:13.132645 kernel: ACPI: Core revision 20230628
Mar 17 17:53:13.132659 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:53:13.132677 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:53:13.132690 kernel: x2apic enabled
Mar 17 17:53:13.132703 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:53:13.132716 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:53:13.132730 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Mar 17 17:53:13.132770 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Mar 17 17:53:13.132783 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 17:53:13.132797 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 17:53:13.132829 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:53:13.132844 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:53:13.132859 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:53:13.132877 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:53:13.132891 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 17:53:13.132905 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:53:13.132920 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:53:13.132934 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 17:53:13.132949 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:53:13.132971 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:53:13.132986 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:53:13.133000 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:53:13.133015 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:53:13.133030 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 17:53:13.133044 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:53:13.133059 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:53:13.133074 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:53:13.133093 kernel: landlock: Up and running.
Mar 17 17:53:13.133108 kernel: SELinux: Initializing.
Mar 17 17:53:13.133122 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:53:13.133136 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:53:13.133151 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Mar 17 17:53:13.133166 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:53:13.133180 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:53:13.133195 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:53:13.133210 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Mar 17 17:53:13.133228 kernel: signal: max sigframe size: 1776
Mar 17 17:53:13.133242 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:53:13.133257 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:53:13.133269 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:53:13.133283 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:53:13.133297 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:53:13.133312 kernel: .... node #0, CPUs: #1
Mar 17 17:53:13.133330 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:53:13.133344 kernel: smpboot: Max logical packages: 1
Mar 17 17:53:13.133363 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Mar 17 17:53:13.133376 kernel: devtmpfs: initialized
Mar 17 17:53:13.133391 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:53:13.133406 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:53:13.133420 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:53:13.133435 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:53:13.133448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:53:13.133461 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:53:13.133476 kernel: audit: type=2000 audit(1742233991.710:1): state=initialized audit_enabled=0 res=1
Mar 17 17:53:13.133495 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:53:13.133509 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:53:13.133522 kernel: cpuidle: using governor menu
Mar 17 17:53:13.133536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:53:13.133549 kernel: dca service started, version 1.12.1
Mar 17 17:53:13.133562 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:53:13.133574 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:53:13.133587 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:53:13.133599 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:53:13.133619 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:53:13.133631 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:53:13.133643 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:53:13.133657 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:53:13.133670 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:53:13.133684 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:53:13.133698 kernel: ACPI: Interpreter enabled
Mar 17 17:53:13.133712 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:53:13.133725 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:53:13.133778 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:53:13.133792 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:53:13.133807 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 17:53:13.133822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:53:13.134171 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:53:13.134386 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 17:53:13.134562 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 17:53:13.134594 kernel: acpiphp: Slot [3] registered
Mar 17 17:53:13.134609 kernel: acpiphp: Slot [4] registered
Mar 17 17:53:13.134624 kernel: acpiphp: Slot [5] registered
Mar 17 17:53:13.134639 kernel: acpiphp: Slot [6] registered
Mar 17 17:53:13.134654 kernel: acpiphp: Slot [7] registered
Mar 17 17:53:13.134668 kernel: acpiphp: Slot [8] registered
Mar 17 17:53:13.134684 kernel: acpiphp: Slot [9] registered
Mar 17 17:53:13.134699 kernel: acpiphp: Slot [10] registered
Mar 17 17:53:13.134715 kernel: acpiphp: Slot [11] registered
Mar 17 17:53:13.134731 kernel: acpiphp: Slot [12] registered
Mar 17 17:53:13.136827 kernel: acpiphp: Slot [13] registered
Mar 17 17:53:13.136843 kernel: acpiphp: Slot [14] registered
Mar 17 17:53:13.136858 kernel: acpiphp: Slot [15] registered
Mar 17 17:53:13.136870 kernel: acpiphp: Slot [16] registered
Mar 17 17:53:13.136883 kernel: acpiphp: Slot [17] registered
Mar 17 17:53:13.136896 kernel: acpiphp: Slot [18] registered
Mar 17 17:53:13.136909 kernel: acpiphp: Slot [19] registered
Mar 17 17:53:13.136921 kernel: acpiphp: Slot [20] registered
Mar 17 17:53:13.136934 kernel: acpiphp: Slot [21] registered
Mar 17 17:53:13.136970 kernel: acpiphp: Slot [22] registered
Mar 17 17:53:13.136983 kernel: acpiphp: Slot [23] registered
Mar 17 17:53:13.136996 kernel: acpiphp: Slot [24] registered
Mar 17 17:53:13.137009 kernel: acpiphp: Slot [25] registered
Mar 17 17:53:13.137022 kernel: acpiphp: Slot [26] registered
Mar 17 17:53:13.137034 kernel: acpiphp: Slot [27] registered
Mar 17 17:53:13.137047 kernel: acpiphp: Slot [28] registered
Mar 17 17:53:13.137060 kernel: acpiphp: Slot [29] registered
Mar 17 17:53:13.137073 kernel: acpiphp: Slot [30] registered
Mar 17 17:53:13.137089 kernel: acpiphp: Slot [31] registered
Mar 17 17:53:13.137102 kernel: PCI host bridge to bus 0000:00
Mar 17 17:53:13.137346 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:53:13.137471 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:53:13.137590 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:53:13.137703 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 17:53:13.137841 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Mar 17 17:53:13.137956 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:53:13.138121 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 17:53:13.138286 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 17:53:13.138435 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 17:53:13.138562 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Mar 17 17:53:13.138690 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 17:53:13.138870 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 17:53:13.139011 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 17:53:13.139142 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 17:53:13.139289 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Mar 17 17:53:13.139422 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Mar 17 17:53:13.139570 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 17:53:13.139717 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 17:53:13.139878 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 17:53:13.140028 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 17:53:13.140162 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 17:53:13.140283 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Mar 17 17:53:13.140419 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Mar 17 17:53:13.140543 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 17:53:13.140667 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:53:13.142930 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:53:13.143139 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Mar 17 17:53:13.143313 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Mar 17 17:53:13.143480 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Mar 17 17:53:13.143689 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:53:13.143906 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Mar 17 17:53:13.144072 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Mar 17 17:53:13.144254 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar 17 17:53:13.144438 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Mar 17 17:53:13.144602 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Mar 17 17:53:13.144800 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Mar 17 17:53:13.144970 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar 17 17:53:13.145178 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:53:13.145344 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 17:53:13.145519 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Mar 17 17:53:13.145678 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Mar 17 17:53:13.145906 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:53:13.146087 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Mar 17 17:53:13.146255 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Mar 17 17:53:13.146422 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Mar 17 17:53:13.146615 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 17:53:13.146806 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Mar 17 17:53:13.146969 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Mar 17 17:53:13.146991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:53:13.147007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:53:13.147024 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:53:13.147040 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:53:13.147057 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 17:53:13.147082 kernel: iommu: Default domain type: Translated
Mar 17 17:53:13.147098 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:53:13.147115 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:53:13.147132 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:53:13.147149 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:53:13.147166 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Mar 17 17:53:13.147342 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 17:53:13.147506 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 17:53:13.147724 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:53:13.147760 kernel: vgaarb: loaded
Mar 17 17:53:13.147776 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:53:13.147791 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:53:13.147807 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:53:13.147822 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:53:13.147838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:53:13.147854 kernel: pnp: PnP ACPI init
Mar 17 17:53:13.147869 kernel: pnp: PnP ACPI: found 4 devices
Mar 17 17:53:13.147891 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:53:13.147906 kernel: NET: Registered PF_INET protocol family
Mar 17 17:53:13.147921 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:53:13.147936 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:53:13.147951 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:53:13.147966 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:53:13.147982 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:53:13.147997 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:53:13.148012 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:53:13.148031 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:53:13.148046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:53:13.148062 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:53:13.148228 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:53:13.148368 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:53:13.148510 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:53:13.148650 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 17:53:13.148812 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Mar 17 17:53:13.148990 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 17:53:13.149149 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 17:53:13.149171 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 17:53:13.149325 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 45260 usecs
Mar 17 17:53:13.149345 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:53:13.149362 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:53:13.149379 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Mar 17 17:53:13.149394 kernel: Initialise system trusted keyrings
Mar 17 17:53:13.149415 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:53:13.149431 kernel: Key type asymmetric registered
Mar 17 17:53:13.149447 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:53:13.149462 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:53:13.149477 kernel: io scheduler mq-deadline registered
Mar 17 17:53:13.149493 kernel: io scheduler kyber registered
Mar 17 17:53:13.149509 kernel: io scheduler bfq registered
Mar 17 17:53:13.149524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:53:13.149541 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 17:53:13.149557 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 17:53:13.149576 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 17:53:13.149591 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:53:13.149603 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:53:13.149617 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:53:13.149632 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:53:13.149645 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:53:13.150022 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 17:53:13.150049 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:53:13.150206 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 17:53:13.150346 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T17:53:12 UTC (1742233992)
Mar 17 17:53:13.150481 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 17 17:53:13.150500 kernel: intel_pstate: CPU model not supported
Mar 17 17:53:13.150517 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:53:13.150532 kernel: Segment Routing with IPv6
Mar 17 17:53:13.150547 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:53:13.150564 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:53:13.150585 kernel: Key type dns_resolver registered
Mar 17 17:53:13.150601 kernel: IPI shorthand broadcast: enabled
Mar 17 17:53:13.150617 kernel: sched_clock: Marking stable (1073031616, 113033490)->(1337718257, -151653151)
Mar 17 17:53:13.150632 kernel: registered taskstats version 1
Mar 17 17:53:13.150648 kernel: Loading compiled-in X.509 certificates
Mar 17 17:53:13.150663 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:53:13.150678 kernel: Key type .fscrypt registered
Mar 17 17:53:13.150693 kernel: Key type fscrypt-provisioning registered
Mar 17 17:53:13.150708 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:53:13.150728 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:53:13.150757 kernel: ima: No architecture policies found
Mar 17 17:53:13.150770 kernel: clk: Disabling unused clocks
Mar 17 17:53:13.150783 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:53:13.150796 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:53:13.150840 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:53:13.150859 kernel: Run /init as init process
Mar 17 17:53:13.150876 kernel: with arguments:
Mar 17 17:53:13.150892 kernel: /init
Mar 17 17:53:13.150911 kernel: with environment:
Mar 17 17:53:13.150926 kernel: HOME=/
Mar 17 17:53:13.150944 kernel: TERM=linux
Mar 17 17:53:13.150959 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:53:13.150989 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:53:13.151012 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:53:13.151030 systemd[1]: Detected virtualization kvm.
Mar 17 17:53:13.151045 systemd[1]: Detected architecture x86-64.
Mar 17 17:53:13.151066 systemd[1]: Running in initrd.
Mar 17 17:53:13.151082 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:53:13.151100 systemd[1]: Hostname set to .
Mar 17 17:53:13.151116 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:53:13.151133 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:53:13.151150 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:53:13.151167 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:53:13.151186 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:53:13.151208 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:53:13.151224 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:53:13.151244 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:53:13.151263 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:53:13.151281 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:53:13.151298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:53:13.151337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:53:13.151354 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:53:13.151372 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:53:13.151394 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:53:13.151410 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:53:13.151427 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:53:13.151448 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:53:13.151465 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:53:13.151483 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:53:13.151501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:53:13.151518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:53:13.151535 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:53:13.151553 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:53:13.151570 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:53:13.151591 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:53:13.151608 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:53:13.151642 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:53:13.151660 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:53:13.151678 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:53:13.151695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:53:13.151712 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:53:13.151819 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:53:13.151845 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:53:13.151863 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:53:13.151961 systemd-journald[181]: Collecting audit messages is disabled.
Mar 17 17:53:13.152005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:53:13.152024 systemd-journald[181]: Journal started
Mar 17 17:53:13.152066 systemd-journald[181]: Runtime Journal (/run/log/journal/94b973c6e6004d47bb508e26eac410f6) is 4.9M, max 39.3M, 34.4M free.
Mar 17 17:53:13.106383 systemd-modules-load[182]: Inserted module 'overlay'
Mar 17 17:53:13.212221 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:53:13.212265 kernel: Bridge firewalling registered
Mar 17 17:53:13.212286 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:53:13.189991 systemd-modules-load[182]: Inserted module 'br_netfilter'
Mar 17 17:53:13.212925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:53:13.213879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:53:13.221034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:53:13.224019 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:53:13.226640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:53:13.232210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:53:13.258860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:53:13.264197 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:53:13.273215 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:53:13.274837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:53:13.279927 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:53:13.287937 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:53:13.333767 dracut-cmdline[220]: dracut-dracut-053
Mar 17 17:53:13.338878 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:53:13.343543 systemd-resolved[213]: Positive Trust Anchors:
Mar 17 17:53:13.343556 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:53:13.343598 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:53:13.353193 systemd-resolved[213]: Defaulting to hostname 'linux'.
Mar 17 17:53:13.355758 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:53:13.356363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:53:13.520832 kernel: SCSI subsystem initialized
Mar 17 17:53:13.539835 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:53:13.557783 kernel: iscsi: registered transport (tcp)
Mar 17 17:53:13.598020 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:53:13.598322 kernel: QLogic iSCSI HBA Driver
Mar 17 17:53:13.704970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:53:13.712290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:53:13.750153 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:53:13.750259 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:53:13.750284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:53:13.811813 kernel: raid6: avx2x4 gen() 12199 MB/s
Mar 17 17:53:13.830919 kernel: raid6: avx2x2 gen() 13542 MB/s
Mar 17 17:53:13.848052 kernel: raid6: avx2x1 gen() 8538 MB/s
Mar 17 17:53:13.848137 kernel: raid6: using algorithm avx2x2 gen() 13542 MB/s
Mar 17 17:53:13.866074 kernel: raid6: .... xor() 9819 MB/s, rmw enabled
Mar 17 17:53:13.866174 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:53:13.907771 kernel: xor: automatically using best checksumming function avx
Mar 17 17:53:14.158887 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:53:14.181117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:53:14.197156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:53:14.219428 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Mar 17 17:53:14.229925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:53:14.241518 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:53:14.290448 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Mar 17 17:53:14.358488 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:53:14.369052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:53:14.474233 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:53:14.484906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:53:14.519855 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:53:14.522217 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:53:14.524013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:53:14.525165 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:53:14.534151 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:53:14.575281 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:53:14.629171 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Mar 17 17:53:14.718425 kernel: scsi host0: Virtio SCSI HBA
Mar 17 17:53:14.718668 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:53:14.718705 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Mar 17 17:53:14.721059 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:53:14.721092 kernel: GPT:9289727 != 125829119
Mar 17 17:53:14.721113 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:53:14.721133 kernel: GPT:9289727 != 125829119
Mar 17 17:53:14.721152 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:53:14.721170 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:53:14.721190 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:53:14.721225 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:53:14.721242 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Mar 17 17:53:14.755070 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Mar 17 17:53:14.755301 kernel: libata version 3.00 loaded.
Mar 17 17:53:14.755324 kernel: ACPI: bus type USB registered
Mar 17 17:53:14.759148 kernel: usbcore: registered new interface driver usbfs
Mar 17 17:53:14.758381 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:53:14.758579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:53:14.761260 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:53:14.762111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:53:14.763924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:53:14.766483 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:53:14.775495 kernel: usbcore: registered new interface driver hub
Mar 17 17:53:14.778787 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 17 17:53:14.832054 kernel: usbcore: registered new device driver usb
Mar 17 17:53:14.832088 kernel: scsi host1: ata_piix
Mar 17 17:53:14.832705 kernel: scsi host2: ata_piix
Mar 17 17:53:14.833004 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Mar 17 17:53:14.833031 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Mar 17 17:53:14.777225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:53:14.952869 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Mar 17 17:53:14.953279 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Mar 17 17:53:14.953488 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Mar 17 17:53:14.953711 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Mar 17 17:53:14.956024 kernel: hub 1-0:1.0: USB hub found
Mar 17 17:53:14.956272 kernel: hub 1-0:1.0: 2 ports detected
Mar 17 17:53:14.956507 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465)
Mar 17 17:53:14.956530 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (460)
Mar 17 17:53:14.782403 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:53:14.944491 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:53:14.969318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:53:14.970351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:53:14.998998 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:53:15.010335 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:53:15.011118 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:53:15.036035 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:53:15.040199 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:53:15.050506 disk-uuid[541]: Primary Header is updated.
Mar 17 17:53:15.050506 disk-uuid[541]: Secondary Entries is updated.
Mar 17 17:53:15.050506 disk-uuid[541]: Secondary Header is updated.
Mar 17 17:53:15.061007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:53:15.091653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:53:16.084847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:53:16.084929 disk-uuid[542]: The operation has completed successfully.
Mar 17 17:53:16.175140 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:53:16.175310 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:53:16.264189 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:53:16.270800 sh[561]: Success
Mar 17 17:53:16.294201 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 17:53:16.433854 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:53:16.443081 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:53:16.444158 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:53:16.493600 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:53:16.493719 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:53:16.493760 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:53:16.493782 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:53:16.496254 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:53:16.508111 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:53:16.509703 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:53:16.520188 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:53:16.524042 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:53:16.551715 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:53:16.551846 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:53:16.553107 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:53:16.559917 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:53:16.576880 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:53:16.577998 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:53:16.586966 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:53:16.595238 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:53:16.768389 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:53:16.772520 ignition[654]: Ignition 2.20.0
Mar 17 17:53:16.772533 ignition[654]: Stage: fetch-offline
Mar 17 17:53:16.778064 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:53:16.772578 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:53:16.778903 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:53:16.772587 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:53:16.772725 ignition[654]: parsed url from cmdline: ""
Mar 17 17:53:16.772729 ignition[654]: no config URL provided
Mar 17 17:53:16.772756 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:53:16.772776 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:53:16.772782 ignition[654]: failed to fetch config: resource requires networking
Mar 17 17:53:16.772974 ignition[654]: Ignition finished successfully
Mar 17 17:53:16.827584 systemd-networkd[753]: lo: Link UP
Mar 17 17:53:16.827603 systemd-networkd[753]: lo: Gained carrier
Mar 17 17:53:16.831161 systemd-networkd[753]: Enumeration completed
Mar 17 17:53:16.831358 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:53:16.832395 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Mar 17 17:53:16.832401 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Mar 17 17:53:16.832938 systemd[1]: Reached target network.target - Network.
Mar 17 17:53:16.840475 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:53:16.840483 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:53:16.841687 systemd-networkd[753]: eth0: Link UP
Mar 17 17:53:16.841695 systemd-networkd[753]: eth0: Gained carrier
Mar 17 17:53:16.841713 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Mar 17 17:53:16.845147 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:53:16.846540 systemd-networkd[753]: eth1: Link UP
Mar 17 17:53:16.846549 systemd-networkd[753]: eth1: Gained carrier
Mar 17 17:53:16.846589 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:53:16.860874 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.18/20 acquired from 169.254.169.253
Mar 17 17:53:16.875917 systemd-networkd[753]: eth0: DHCPv4 address 24.199.119.133/20, gateway 24.199.112.1 acquired from 169.254.169.253
Mar 17 17:53:16.891141 ignition[757]: Ignition 2.20.0
Mar 17 17:53:16.892546 ignition[757]: Stage: fetch
Mar 17 17:53:16.892899 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:53:16.892918 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:53:16.893108 ignition[757]: parsed url from cmdline: ""
Mar 17 17:53:16.893114 ignition[757]: no config URL provided
Mar 17 17:53:16.893123 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:53:16.893149 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:53:16.893202 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Mar 17 17:53:16.925164 ignition[757]: GET result: OK
Mar 17 17:53:16.925289 ignition[757]: parsing config with SHA512: ac48f0640eb4883e73a27b4e54a74c4218a03228992d483abc052e3cbfcba82fd71a8dbd2ddad9099ff7974d8e43590bc124256796b94ec74eefad1c3c3efbe4
Mar 17 17:53:16.937288 unknown[757]: fetched base config from "system"
Mar 17 17:53:16.937300 unknown[757]: fetched base config from "system"
Mar 17 17:53:16.937311 unknown[757]: fetched user config from "digitalocean"
Mar 17 17:53:16.939093 ignition[757]: fetch: fetch complete
Mar 17 17:53:16.939131 ignition[757]: fetch: fetch passed
Mar 17 17:53:16.939242 ignition[757]: Ignition finished successfully
Mar 17 17:53:16.942349 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:53:16.952056 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:53:16.983311 ignition[765]: Ignition 2.20.0
Mar 17 17:53:16.983327 ignition[765]: Stage: kargs
Mar 17 17:53:16.983791 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:53:16.983814 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:53:16.985339 ignition[765]: kargs: kargs passed
Mar 17 17:53:16.985431 ignition[765]: Ignition finished successfully
Mar 17 17:53:16.987478 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:53:17.008205 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:53:17.038798 ignition[772]: Ignition 2.20.0
Mar 17 17:53:17.038819 ignition[772]: Stage: disks
Mar 17 17:53:17.039157 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:53:17.039180 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:53:17.044202 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:53:17.041060 ignition[772]: disks: disks passed
Mar 17 17:53:17.047832 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:53:17.041122 ignition[772]: Ignition finished successfully
Mar 17 17:53:17.049108 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:53:17.049632 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:53:17.051289 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:53:17.052378 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:53:17.060123 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:53:17.096936 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:53:17.103964 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:53:17.492955 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:53:17.678895 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 17:53:17.680030 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:53:17.681293 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:53:17.694851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:53:17.700118 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:53:17.708326 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Mar 17 17:53:17.718022 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:53:17.726466 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (789)
Mar 17 17:53:17.726535 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:53:17.726550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:53:17.726597 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:53:17.720469 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:53:17.720535 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:53:17.752790 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:53:17.754954 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:53:17.759381 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:53:17.780884 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:53:17.858912 coreos-metadata[792]: Mar 17 17:53:17.858 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:53:17.883493 coreos-metadata[791]: Mar 17 17:53:17.883 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:53:17.885295 coreos-metadata[792]: Mar 17 17:53:17.885 INFO Fetch successful
Mar 17 17:53:17.894206 coreos-metadata[792]: Mar 17 17:53:17.894 INFO wrote hostname ci-4230.1.0-f-ebc70812f4 to /sysroot/etc/hostname
Mar 17 17:53:17.895435 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:53:17.902285 coreos-metadata[791]: Mar 17 17:53:17.902 INFO Fetch successful
Mar 17 17:53:17.906691 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:53:17.913576 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Mar 17 17:53:17.913790 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Mar 17 17:53:17.918803 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:53:17.923982 systemd-networkd[753]: eth0: Gained IPv6LL
Mar 17 17:53:17.929151 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:53:17.943621 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:53:18.141879 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:53:18.150069 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:53:18.153023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:53:18.165796 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:53:18.182363 systemd-networkd[753]: eth1: Gained IPv6LL
Mar 17 17:53:18.206546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:53:18.212014 ignition[909]: INFO : Ignition 2.20.0
Mar 17 17:53:18.212014 ignition[909]: INFO : Stage: mount
Mar 17 17:53:18.214023 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:53:18.214023 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:53:18.215455 ignition[909]: INFO : mount: mount passed
Mar 17 17:53:18.215455 ignition[909]: INFO : Ignition finished successfully
Mar 17 17:53:18.216851 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:53:18.221001 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:53:18.490403 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:53:18.508067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:53:18.529781 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (921)
Mar 17 17:53:18.530356 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:53:18.542320 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:53:18.542426 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:53:18.547870 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:53:18.554175 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
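
The coreos-metadata entries above fetch the droplet's metadata JSON and write the hostname into the new root. A minimal sketch of that step; the endpoint and target path are taken from the log, while the "hostname" field name and the helper itself are assumptions about the metadata schema, not the agent's actual code:

    # Sketch: fetch metadata, write hostname under /sysroot, as logged above.
    import json
    import urllib.request

    def write_hostname(sysroot: str = "/sysroot") -> str:
        url = "http://169.254.169.254/metadata/v1.json"
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
        hostname = meta["hostname"]  # field name assumed, not from the log
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        print(f"INFO wrote hostname {hostname} to {sysroot}/etc/hostname")
        return hostname
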
Mar 17 17:53:18.587761 ignition[938]: INFO : Ignition 2.20.0
Mar 17 17:53:18.589224 ignition[938]: INFO : Stage: files
Mar 17 17:53:18.589784 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:53:18.589784 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:53:18.591932 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:53:18.593574 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:53:18.593574 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:53:18.608310 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:53:18.609485 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:53:18.611161 unknown[938]: wrote ssh authorized keys file for user: core
Mar 17 17:53:18.612367 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:53:18.616170 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:53:18.616170 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:53:18.664938 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:53:18.810444 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:53:18.810444 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:53:18.812502 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 17:53:19.282326 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:53:19.430310 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:53:19.430310 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:53:19.432963 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 17 17:53:19.876887 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:53:20.205098 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:53:20.205098 ignition[938]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:53:20.206805 ignition[938]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:53:20.208037 ignition[938]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:53:20.208037 ignition[938]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:53:20.208037 ignition[938]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:53:20.208037 ignition[938]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:53:20.211497 ignition[938]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:53:20.211497 ignition[938]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:53:20.211497 ignition[938]: INFO : files: files passed
Mar 17 17:53:20.211497 ignition[938]: INFO : Ignition finished successfully
Mar 17 17:53:20.210582 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:53:20.220380 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:53:20.223118 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:53:20.245521 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:53:20.246274 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
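
Ops op(c)-op(e) above write a systemd unit into the target root and then set its preset to enabled, which is what makes prepare-helm.service start on first boot. A hedged sketch of those two steps; the unit path mirrors the log, the preset file name follows Ignition's convention but is an assumption here, and no unit body is shown because none appears in the log:

    # Sketch of "writing unit" + "setting preset to enabled" from the
    # files stage above. Illustrative, not Ignition's implementation.
    import os

    def install_unit(sysroot: str, name: str, body: str) -> None:
        unit_dir = f"{sysroot}/etc/systemd/system"
        preset_dir = f"{sysroot}/etc/systemd/system-preset"
        os.makedirs(unit_dir, exist_ok=True)
        os.makedirs(preset_dir, exist_ok=True)
        with open(f"{unit_dir}/{name}", "w") as f:
            f.write(body)
        # A preset entry makes `systemctl preset-all` enable the unit
        # inside the new root; file name assumed.
        with open(f"{preset_dir}/20-ignition.preset", "a") as f:
            f.write(f"enable {name}\n")
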
Mar 17 17:53:20.262774 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:53:20.262774 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:53:20.266995 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:53:20.269625 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:53:20.271329 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:53:20.282212 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:53:20.341772 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:53:20.341976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:53:20.343525 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:53:20.344350 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:53:20.345359 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:53:20.352329 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:53:20.394843 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:53:20.406368 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:53:20.429475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:53:20.431800 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:53:20.433588 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:53:20.434636 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:53:20.434872 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:53:20.439016 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:53:20.440342 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:53:20.444044 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:53:20.444949 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:53:20.446265 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:53:20.449229 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:53:20.450421 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:53:20.452794 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:53:20.453921 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:53:20.454675 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:53:20.455253 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:53:20.459124 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:53:20.460403 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:53:20.461095 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:53:20.461722 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:53:20.462053 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:53:20.463447 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:53:20.463826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:53:20.466035 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:53:20.466400 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:53:20.467814 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:53:20.468067 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:53:20.468817 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:53:20.469078 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:53:20.481027 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:53:20.483872 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:53:20.486559 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:53:20.495533 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:53:20.496946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:53:20.497297 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:53:20.505884 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:53:20.506064 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:53:20.517799 ignition[991]: INFO : Ignition 2.20.0
Mar 17 17:53:20.517799 ignition[991]: INFO : Stage: umount
Mar 17 17:53:20.517799 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:53:20.517799 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 17:53:20.527820 ignition[991]: INFO : umount: umount passed
Mar 17 17:53:20.527820 ignition[991]: INFO : Ignition finished successfully
Mar 17 17:53:20.528999 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:53:20.529838 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:53:20.531297 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:53:20.531400 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:53:20.535514 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:53:20.535693 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:53:20.539247 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:53:20.539366 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:53:20.540475 systemd[1]: Stopped target network.target - Network.
Mar 17 17:53:20.540923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:53:20.541027 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:53:20.543220 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:53:20.546478 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:53:20.549834 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:53:20.550463 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:53:20.551011 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:53:20.552442 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:53:20.552531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:53:20.553774 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:53:20.553853 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:53:20.555940 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:53:20.556060 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:53:20.559121 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:53:20.559224 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:53:20.560009 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:53:20.562643 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:53:20.567416 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:53:20.569560 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:53:20.569726 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:53:20.579502 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:53:20.580490 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:53:20.580644 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:53:20.587723 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:53:20.588343 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:53:20.588566 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:53:20.592675 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:53:20.592844 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:53:20.595023 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:53:20.595129 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:53:20.604617 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:53:20.605238 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:53:20.605431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:53:20.606283 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:53:20.606358 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:53:20.607719 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:53:20.607956 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:53:20.609071 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:53:20.609196 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:53:20.610655 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:53:20.614283 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:53:20.614398 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:53:20.631556 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:53:20.631807 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:53:20.636888 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:53:20.637006 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:53:20.637831 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:53:20.637902 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:53:20.638459 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:53:20.638532 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:53:20.642130 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:53:20.642244 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:53:20.644434 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:53:20.644549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:53:20.651150 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:53:20.651840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:53:20.651999 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:53:20.652803 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:53:20.652901 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:53:20.655977 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:53:20.656085 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:53:20.657344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:53:20.657459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:53:20.661230 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:53:20.661351 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:53:20.662063 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:53:20.668374 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:53:20.683648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:53:20.684040 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:53:20.690832 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:53:20.702215 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:53:20.717320 systemd[1]: Switching root.
Mar 17 17:53:20.757235 systemd-journald[181]: Journal stopped
Mar 17 17:53:22.623888 systemd-journald[181]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:53:22.624049 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:53:22.624072 kernel: SELinux: policy capability open_perms=1
Mar 17 17:53:22.624085 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:53:22.624119 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:53:22.624135 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:53:22.624157 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:53:22.624175 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:53:22.624192 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:53:22.624210 kernel: audit: type=1403 audit(1742234000.959:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:53:22.624234 systemd[1]: Successfully loaded SELinux policy in 49.371ms.
Mar 17 17:53:22.624269 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.895ms.
Mar 17 17:53:22.624315 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:53:22.624341 systemd[1]: Detected virtualization kvm.
Mar 17 17:53:22.624361 systemd[1]: Detected architecture x86-64.
Mar 17 17:53:22.624383 systemd[1]: Detected first boot.
Mar 17 17:53:22.624406 systemd[1]: Hostname set to <ci-4230.1.0-f-ebc70812f4>.
Mar 17 17:53:22.624425 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:53:22.624443 zram_generator::config[1035]: No configuration found.
Mar 17 17:53:22.624467 kernel: Guest personality initialized and is inactive
Mar 17 17:53:22.624482 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 17:53:22.624500 kernel: Initialized host personality
Mar 17 17:53:22.624518 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:53:22.624539 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:53:22.624560 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:53:22.624582 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:53:22.624606 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:53:22.624627 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:53:22.624643 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:53:22.624663 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:53:22.624684 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:53:22.624697 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:53:22.624711 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:53:22.624725 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:53:22.624738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:53:22.624752 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:53:22.624841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:53:22.624857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
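
"Initializing machine ID from VM UUID." above: on KVM guests, systemd can derive /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one. A rough sketch of that idea; the sysfs path is standard, but the exact derivation systemd performs is more involved, so treat this as an assumption-laden illustration:

    # Sketch: derive a machine-id-shaped value from the VM's DMI UUID.
    import pathlib
    import uuid

    def machine_id_from_vm_uuid() -> str:
        raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        # machine-id format: 32 lowercase hex characters, no dashes.
        return uuid.UUID(raw).hex

    # pathlib.Path("/etc/machine-id").write_text(machine_id_from_vm_uuid() + "\n")
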
Mar 17 17:53:22.624876 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:53:22.624889 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:53:22.624907 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:53:22.624929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:53:22.624947 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:53:22.624961 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:53:22.624978 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:53:22.624992 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:53:22.625018 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:53:22.625031 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:53:22.625045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:53:22.625058 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:53:22.625088 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:53:22.625102 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:53:22.625114 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:53:22.625132 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:53:22.625146 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:53:22.625159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:53:22.625173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:53:22.627118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:53:22.627167 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:53:22.627181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:53:22.627194 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:53:22.627214 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:53:22.627238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:22.627252 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:53:22.627264 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:53:22.627278 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:53:22.627293 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:53:22.627307 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:53:22.627321 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:53:22.627334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:53:22.627351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:53:22.627364 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:53:22.627377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:53:22.627390 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:53:22.627403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:53:22.627420 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:53:22.627437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:53:22.627452 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:53:22.627465 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:53:22.627490 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:53:22.627509 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:53:22.627528 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:53:22.627546 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:53:22.627560 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:53:22.627576 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:53:22.627613 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:53:22.627633 kernel: loop: module loaded
Mar 17 17:53:22.627654 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:53:22.627680 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:53:22.627704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:53:22.633238 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:53:22.633279 systemd[1]: Stopped verity-setup.service.
Mar 17 17:53:22.633300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:22.633321 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:53:22.633342 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:53:22.633361 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:53:22.633380 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:53:22.633398 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:53:22.633426 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:53:22.633446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:53:22.633466 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:53:22.633487 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:53:22.633508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:53:22.633529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:53:22.633550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:53:22.633570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:53:22.633597 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:53:22.633617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:53:22.633639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:53:22.633660 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:53:22.633682 kernel: ACPI: bus type drm_connector registered
Mar 17 17:53:22.633705 kernel: fuse: init (API version 7.39)
Mar 17 17:53:22.633828 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:53:22.633860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:53:22.633879 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:53:22.633909 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:53:22.633929 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:53:22.633950 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:53:22.633971 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:53:22.633998 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:53:22.634018 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:53:22.635899 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:53:22.635994 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:53:22.636027 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:53:22.636079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:53:22.636214 systemd-journald[1104]: Collecting audit messages is disabled.
Mar 17 17:53:22.636289 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:53:22.636325 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:53:22.636356 systemd-journald[1104]: Journal started
Mar 17 17:53:22.636409 systemd-journald[1104]: Runtime Journal (/run/log/journal/94b973c6e6004d47bb508e26eac410f6) is 4.9M, max 39.3M, 34.4M free.
Mar 17 17:53:22.083060 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:53:22.096251 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:53:22.097034 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:53:22.639923 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:53:22.642825 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:53:22.656880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:53:22.674850 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:53:22.684783 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:53:22.694821 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:53:22.720153 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:53:22.724727 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:53:22.726979 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:53:22.735186 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:53:22.753926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:53:22.770509 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:53:22.807336 kernel: loop0: detected capacity change from 0 to 147912
Mar 17 17:53:22.842206 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:53:22.843996 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:53:22.860129 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:53:22.872301 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:53:22.922021 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:53:22.933572 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:53:22.942765 systemd-tmpfiles[1133]: ACLs are not supported, ignoring.
Mar 17 17:53:22.942821 systemd-tmpfiles[1133]: ACLs are not supported, ignoring.
Mar 17 17:53:22.969007 systemd-journald[1104]: Time spent on flushing to /var/log/journal/94b973c6e6004d47bb508e26eac410f6 is 137.463ms for 1014 entries.
Mar 17 17:53:22.969007 systemd-journald[1104]: System Journal (/var/log/journal/94b973c6e6004d47bb508e26eac410f6) is 8M, max 195.6M, 187.6M free.
Mar 17 17:53:23.135180 systemd-journald[1104]: Received client request to flush runtime journal.
Mar 17 17:53:23.135270 kernel: loop1: detected capacity change from 0 to 205544
Mar 17 17:53:23.135296 kernel: loop2: detected capacity change from 0 to 138176
Mar 17 17:53:22.964707 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:53:22.967191 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:53:22.979035 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:53:23.070894 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:53:23.088568 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:53:23.102078 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:53:23.107409 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:53:23.143334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:53:23.149588 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:53:23.211835 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:53:23.244990 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 17 17:53:23.245027 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 17 17:53:23.266449 kernel: loop3: detected capacity change from 0 to 8
Mar 17 17:53:23.268010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:53:23.293820 kernel: loop4: detected capacity change from 0 to 147912
Mar 17 17:53:23.337775 kernel: loop5: detected capacity change from 0 to 205544
Mar 17 17:53:23.396795 kernel: loop6: detected capacity change from 0 to 138176
Mar 17 17:53:23.430820 kernel: loop7: detected capacity change from 0 to 8
Mar 17 17:53:23.434717 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Mar 17 17:53:23.439949 (sd-merge)[1190]: Merged extensions into '/usr'.
Mar 17 17:53:23.461081 systemd[1]: Reload requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:53:23.461120 systemd[1]: Reloading...
Mar 17 17:53:23.682927 zram_generator::config[1216]: No configuration found.
Mar 17 17:53:23.913778 ldconfig[1128]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:53:24.106830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:53:24.234523 systemd[1]: Reloading finished in 772 ms.
Mar 17 17:53:24.253783 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:53:24.262709 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:53:24.280521 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:53:24.285191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:53:24.335952 systemd[1]: Reload requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:53:24.335983 systemd[1]: Reloading...
Mar 17 17:53:24.342108 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:53:24.343132 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:53:24.345267 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:53:24.346238 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 17 17:53:24.346522 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 17 17:53:24.354135 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:53:24.354366 systemd-tmpfiles[1262]: Skipping /boot
Mar 17 17:53:24.378234 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:53:24.378646 systemd-tmpfiles[1262]: Skipping /boot
Mar 17 17:53:24.594370 zram_generator::config[1294]: No configuration found.
Mar 17 17:53:24.831975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:53:24.959295 systemd[1]: Reloading finished in 622 ms.
Mar 17 17:53:24.974548 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:53:24.996153 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
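
The (sd-merge) entries above are systemd-sysext merging the extension images that Ignition staged earlier (the kubernetes.raw symlink under /etc/extensions, plus the built-in Flatcar extensions) into /usr as an overlay. A sketch of just the discovery step under the assumption that extensions are *.raw images in the standard sysext search paths; the overlay mount itself is left to systemd-sysext:

    # Sketch: enumerate sysext images, as reflected in the
    # "Using extensions ..." entry above. Illustrative only.
    import pathlib

    SYSEXT_DIRS = ["/etc/extensions", "/var/lib/extensions"]

    def discover_extensions() -> list[str]:
        names = []
        for d in map(pathlib.Path, SYSEXT_DIRS):
            if d.is_dir():
                names.extend(img.stem for img in sorted(d.glob("*.raw")))
        return names

    print(f"Using extensions {discover_extensions()!r}.")
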
Mar 17 17:53:25.017471 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:53:25.022136 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:53:25.032462 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:53:25.039853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:53:25.050281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:53:25.061327 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:53:25.068648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.070304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:53:25.083354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:53:25.094531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:53:25.102282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:53:25.103147 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:53:25.103407 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:53:25.103551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.114185 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.114486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:53:25.115998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:53:25.116262 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:53:25.128347 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:53:25.130039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.143019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.143391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:53:25.145792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:53:25.148848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:53:25.149105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:53:25.149299 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.150596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:53:25.152014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:53:25.170463 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:53:25.190538 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:53:25.194176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:53:25.214276 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:53:25.226872 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:53:25.228529 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:53:25.230129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:53:25.241115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:53:25.241767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:53:25.243534 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:53:25.245232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:53:25.247885 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:53:25.264408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:53:25.265540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:53:25.265905 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:53:25.282480 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Mar 17 17:53:25.305431 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:53:25.330395 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:53:25.362417 augenrules[1382]: No rules
Mar 17 17:53:25.366658 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:53:25.367097 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:53:25.372665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:53:25.390172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:53:25.590950 systemd-resolved[1340]: Positive Trust Anchors:
Mar 17 17:53:25.590984 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:53:25.591027 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:53:25.609634 systemd-resolved[1340]: Using system hostname 'ci-4230.1.0-f-ebc70812f4'.
Mar 17 17:53:25.622136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:53:25.624986 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:53:25.629723 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:53:25.630667 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:53:25.638572 systemd-networkd[1393]: lo: Link UP
Mar 17 17:53:25.638584 systemd-networkd[1393]: lo: Gained carrier
Mar 17 17:53:25.640209 systemd-networkd[1393]: Enumeration completed
Mar 17 17:53:25.640370 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:53:25.640846 systemd-timesyncd[1357]: No network connectivity, watching for changes.
Mar 17 17:53:25.641153 systemd[1]: Reached target network.target - Network.
Mar 17 17:53:25.649159 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:53:25.659098 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:53:25.723807 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:53:25.737906 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:53:25.747794 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1401)
Mar 17 17:53:25.781868 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Mar 17 17:53:25.800903 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Mar 17 17:53:25.801453 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.801692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:53:25.811451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:53:25.822019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:53:25.828976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:53:25.830348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:53:25.830414 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
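
The systemd-resolved entries above list its DNSSEC trust configuration: the root zone's DS record (". IN DS 20326 8 2 ...") as the positive trust anchor, and a set of private/local domains as negative trust anchors under which validation is skipped. A small sketch of that data and the containment check it implies; the record and domain list are taken from the log (the list below is abridged), the helper function is illustrative:

    # Root DNSSEC trust anchor and negative-trust-anchor check, as logged.
    ROOT_DS = (".", "IN", "DS", 20326, 8, 2,
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    NEGATIVE_TRUST_ANCHORS = {  # abridged from the log entry above
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
        "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal",
        "intranet", "lan", "local", "private", "test",
    }

    def dnssec_applies(domain: str) -> bool:
        # Validation is disabled for any name at or below a negative anchor.
        labels = domain.rstrip(".").split(".")
        return not any(".".join(labels[i:]) in NEGATIVE_TRUST_ANCHORS
                       for i in range(len(labels)))
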
Mar 17 17:53:25.830460 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:53:25.830482 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:53:25.832367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:53:25.833150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:53:25.837316 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:53:25.870629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:53:25.872170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:53:25.873782 kernel: ISO 9660 Extensions: RRIP_1991A
Mar 17 17:53:25.879837 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Mar 17 17:53:25.881906 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:53:25.882261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:53:25.887551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:53:25.888132 systemd-networkd[1393]: eth0: Configuring with /run/systemd/network/10-6a:72:04:15:b7:4f.network.
Mar 17 17:53:25.889261 systemd-networkd[1393]: eth0: Link UP
Mar 17 17:53:25.889272 systemd-networkd[1393]: eth0: Gained carrier
Mar 17 17:53:25.893261 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection.
Mar 17 17:53:25.926784 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 17:53:25.930953 systemd-networkd[1393]: eth1: Configuring with /run/systemd/network/10-ea:f5:4f:84:47:2c.network.
Mar 17 17:53:25.932766 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:53:25.934404 systemd-networkd[1393]: eth1: Link UP
Mar 17 17:53:25.934414 systemd-networkd[1393]: eth1: Gained carrier
Mar 17 17:53:25.946795 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 17 17:53:25.956666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:53:25.970077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:53:25.993769 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 17:53:26.417984 systemd-timesyncd[1357]: Contacted time server 66.228.58.20:123 (1.flatcar.pool.ntp.org).
Mar 17 17:53:26.418067 systemd-timesyncd[1357]: Initial clock synchronization to Mon 2025-03-17 17:53:26.417810 UTC.
Mar 17 17:53:26.419941 systemd-resolved[1340]: Clock change detected. Flushing caches.
Mar 17 17:53:26.428224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
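
The networkd entries above show per-interface units generated at runtime and named after each NIC's MAC address (/run/systemd/network/10-<MAC>.network), so configuration survives unpredictable interface naming. A hedged sketch of producing such a unit; the MAC comes from the log and the [Match]/[Network] option names follow systemd.network conventions, but the DHCP details are assumptions rather than the generator's actual output:

    # Sketch: render a MAC-pinned .network unit like the ones in the log.
    def render_network_unit(mac: str) -> str:
        return (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            "DHCP=ipv4\n"  # assumed; the real unit's options are not logged
        )

    print(render_network_unit("6a:72:04:15:b7:4f"))  # MAC taken from the log
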
Mar 17 17:53:26.529025 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Mar 17 17:53:26.529160 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:53:26.542299 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Mar 17 17:53:26.575886 kernel: Console: switching to colour dummy device 80x25 Mar 17 17:53:26.576039 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 17 17:53:26.576069 kernel: [drm] features: -context_init Mar 17 17:53:26.576095 kernel: [drm] number of scanouts: 1 Mar 17 17:53:26.576118 kernel: [drm] number of cap sets: 0 Mar 17 17:53:26.615789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:53:26.627510 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Mar 17 17:53:26.629127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:53:26.629454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:53:26.635253 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Mar 17 17:53:26.635382 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:53:26.638357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:53:26.653947 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 17 17:53:26.701798 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:53:26.702238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:53:26.709893 kernel: EDAC MC: Ver: 3.0.0 Mar 17 17:53:26.709225 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:53:26.723204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:53:26.741900 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:53:26.755095 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:53:26.777171 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:53:26.810072 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:53:26.810554 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:53:26.825323 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:53:26.829949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:53:26.836564 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:53:26.836331 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:53:26.844371 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:53:26.844595 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:53:26.845035 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:53:26.845322 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:53:26.845434 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:53:26.845522 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Mar 17 17:53:26.845565 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:53:26.845660 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:53:26.847592 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:53:26.849523 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:53:26.854633 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:53:26.861126 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:53:26.863423 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:53:26.879588 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:53:26.882400 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:53:26.885002 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:53:26.889165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:53:26.892234 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:53:26.893965 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:53:26.896198 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:53:26.896255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:53:26.905055 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:53:26.910520 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:53:26.918525 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:53:26.936151 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:53:26.944102 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:53:26.944750 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:53:26.956154 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:53:26.958950 jq[1462]: false Mar 17 17:53:26.969015 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:53:26.977187 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:53:26.988508 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:53:26.999619 dbus-daemon[1459]: [system] SELinux support is enabled Mar 17 17:53:27.003223 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:53:27.008361 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:53:27.009329 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:53:27.026136 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:53:27.038891 coreos-metadata[1458]: Mar 17 17:53:27.035 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:53:27.036238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:53:27.042922 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 17 17:53:27.055531 coreos-metadata[1458]: Mar 17 17:53:27.055 INFO Fetch successful Mar 17 17:53:27.064468 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:53:27.065950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:53:27.076048 update_engine[1470]: I20250317 17:53:27.075907 1470 main.cc:92] Flatcar Update Engine starting Mar 17 17:53:27.076455 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:53:27.076930 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:53:27.096273 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:53:27.096342 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:53:27.099878 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:53:27.100019 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Mar 17 17:53:27.100057 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:53:27.107655 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:53:27.114889 update_engine[1470]: I20250317 17:53:27.110776 1470 update_check_scheduler.cc:74] Next update check in 10m6s Mar 17 17:53:27.116676 jq[1471]: true Mar 17 17:53:27.121195 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:53:27.134741 extend-filesystems[1463]: Found loop4 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found loop5 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found loop6 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found loop7 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda1 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda2 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda3 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found usr Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda4 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda6 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda7 Mar 17 17:53:27.159385 extend-filesystems[1463]: Found vda9 Mar 17 17:53:27.159385 extend-filesystems[1463]: Checking size of /dev/vda9 Mar 17 17:53:27.199620 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:53:27.235062 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:53:27.249585 tar[1475]: linux-amd64/helm Mar 17 17:53:27.237411 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:53:27.277356 jq[1484]: true Mar 17 17:53:27.292873 extend-filesystems[1463]: Resized partition /dev/vda9 Mar 17 17:53:27.313282 extend-filesystems[1505]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:53:27.328001 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
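[Editor's aside] The metadata fetch above targets DigitalOcean's link-local endpoint, which any process on the droplet can reach without credentials. A stdlib-only sketch of the same request; the example keys in the comment are typical of that API, not taken from this log.

    import json
    import urllib.request

    with urllib.request.urlopen("http://169.254.169.254/metadata/v1.json", timeout=5) as resp:
        metadata = json.load(resp)
    # Top-level keys include things like droplet_id, hostname, interfaces,
    # public_keys and region; the exact schema is DigitalOcean's.
    print(sorted(metadata))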
Mar 17 17:53:27.345812 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Mar 17 17:53:27.337294 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:53:27.374428 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1395) Mar 17 17:53:27.492702 systemd-logind[1469]: New seat seat0. Mar 17 17:53:27.518861 bash[1519]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:53:27.524376 systemd-logind[1469]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:53:27.524407 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:53:27.524990 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:53:27.531093 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:53:27.541351 systemd[1]: Starting sshkeys.service... Mar 17 17:53:27.555094 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:53:27.681468 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 17:53:27.682522 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:53:27.696432 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:53:27.713414 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:53:27.732177 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:53:27.752374 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:53:27.768021 extend-filesystems[1505]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:53:27.768021 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 17:53:27.768021 extend-filesystems[1505]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 17:53:27.778436 extend-filesystems[1463]: Resized filesystem in /dev/vda9 Mar 17 17:53:27.778436 extend-filesystems[1463]: Found vdb Mar 17 17:53:27.781036 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:53:27.781532 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:53:27.796281 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:53:27.796632 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:53:27.815469 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:53:27.823650 coreos-metadata[1536]: Mar 17 17:53:27.823 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:53:27.843925 coreos-metadata[1536]: Mar 17 17:53:27.842 INFO Fetch successful Mar 17 17:53:27.863961 unknown[1536]: wrote ssh authorized keys file for user: core Mar 17 17:53:27.880940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:53:27.902394 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:53:27.919791 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:53:27.927140 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:53:27.943872 update-ssh-keys[1552]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:53:27.943521 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
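[Editor's aside] The resize2fs figures above fully determine the growth: at this ext4 filesystem's 4 KiB block size, the root volume went from about 2.1 GiB (the initial image) to about 57.7 GiB of provisioned disk.

    GIB = 1 << 30
    for label, blocks in (("before", 553_472), ("after", 15_121_403)):
        print(f"{label}: {blocks} blocks x 4 KiB = {blocks * 4096 / GIB:.2f} GiB")
    # before: 553472 blocks x 4 KiB = 2.11 GiB
    # after: 15121403 blocks x 4 KiB = 57.68 GiB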
Mar 17 17:53:27.956903 systemd[1]: Finished sshkeys.service. Mar 17 17:53:28.017562 containerd[1492]: time="2025-03-17T17:53:28.017358467Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:53:28.086990 containerd[1492]: time="2025-03-17T17:53:28.086544395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:53:28.097762 containerd[1492]: time="2025-03-17T17:53:28.097690038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.097975985Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098021771Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098304487Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098342570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098447337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098473566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098893157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098926261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098949601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.098970394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.099119108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:53:28.099877 containerd[1492]: time="2025-03-17T17:53:28.099413630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:53:28.100392 containerd[1492]: time="2025-03-17T17:53:28.099654614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:53:28.100392 containerd[1492]: time="2025-03-17T17:53:28.099676822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:53:28.100392 containerd[1492]: time="2025-03-17T17:53:28.099804081Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:53:28.100661 containerd[1492]: time="2025-03-17T17:53:28.100627997Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:53:28.105908 containerd[1492]: time="2025-03-17T17:53:28.105845257Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:53:28.106180 containerd[1492]: time="2025-03-17T17:53:28.106158011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:53:28.106318 containerd[1492]: time="2025-03-17T17:53:28.106302354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:53:28.106396 containerd[1492]: time="2025-03-17T17:53:28.106382393Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:53:28.106466 containerd[1492]: time="2025-03-17T17:53:28.106451079Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:53:28.107102 containerd[1492]: time="2025-03-17T17:53:28.107063633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:53:28.107685 containerd[1492]: time="2025-03-17T17:53:28.107652626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:53:28.108069 containerd[1492]: time="2025-03-17T17:53:28.108045006Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:53:28.108193 containerd[1492]: time="2025-03-17T17:53:28.108176532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:53:28.108287 containerd[1492]: time="2025-03-17T17:53:28.108272650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:53:28.108375 containerd[1492]: time="2025-03-17T17:53:28.108361309Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:53:28.108459 containerd[1492]: time="2025-03-17T17:53:28.108444357Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:53:28.108627 containerd[1492]: time="2025-03-17T17:53:28.108532075Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:53:28.108627 containerd[1492]: time="2025-03-17T17:53:28.108561355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:53:28.108851 containerd[1492]: time="2025-03-17T17:53:28.108709022Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 17 17:53:28.108851 containerd[1492]: time="2025-03-17T17:53:28.108737503Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:53:28.108851 containerd[1492]: time="2025-03-17T17:53:28.108757350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:53:28.108851 containerd[1492]: time="2025-03-17T17:53:28.108788940Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:53:28.108851 containerd[1492]: time="2025-03-17T17:53:28.108822377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.109294 containerd[1492]: time="2025-03-17T17:53:28.109070974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.109294 containerd[1492]: time="2025-03-17T17:53:28.109099711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.109294 containerd[1492]: time="2025-03-17T17:53:28.109135621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.109294 containerd[1492]: time="2025-03-17T17:53:28.109156114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.109294 containerd[1492]: time="2025-03-17T17:53:28.109176434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.109294 containerd[1492]: time="2025-03-17T17:53:28.109221280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.109294 containerd[1492]: time="2025-03-17T17:53:28.109250944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.109699477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.109783185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.109818093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.109886365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.109909093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.109930969Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.109982684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.110003232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Mar 17 17:53:28.110113 containerd[1492]: time="2025-03-17T17:53:28.110054598Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:53:28.110942 containerd[1492]: time="2025-03-17T17:53:28.110250689Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:53:28.110942 containerd[1492]: time="2025-03-17T17:53:28.110721591Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:53:28.110942 containerd[1492]: time="2025-03-17T17:53:28.110746491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:53:28.110942 containerd[1492]: time="2025-03-17T17:53:28.110770399Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:53:28.110942 containerd[1492]: time="2025-03-17T17:53:28.110799281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.110942 containerd[1492]: time="2025-03-17T17:53:28.110822539Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:53:28.110942 containerd[1492]: time="2025-03-17T17:53:28.110893579Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:53:28.111577 containerd[1492]: time="2025-03-17T17:53:28.110912335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:53:28.112108 containerd[1492]: time="2025-03-17T17:53:28.111950620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:53:28.112108 containerd[1492]: time="2025-03-17T17:53:28.112048031Z" level=info msg="Connect containerd service" Mar 17 17:53:28.113338 containerd[1492]: time="2025-03-17T17:53:28.112564934Z" level=info msg="using legacy CRI server" Mar 17 17:53:28.113338 containerd[1492]: time="2025-03-17T17:53:28.112593698Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:53:28.113338 containerd[1492]: time="2025-03-17T17:53:28.112797384Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:53:28.114293 containerd[1492]: time="2025-03-17T17:53:28.114252656Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:53:28.114588 containerd[1492]: time="2025-03-17T17:53:28.114536009Z" level=info msg="Start subscribing containerd event" Mar 17 17:53:28.115404 containerd[1492]: time="2025-03-17T17:53:28.114770758Z" level=info msg="Start recovering state" Mar 17 17:53:28.115404 containerd[1492]: time="2025-03-17T17:53:28.114915814Z" level=info msg="Start event monitor" Mar 17 17:53:28.115404 containerd[1492]: time="2025-03-17T17:53:28.114939267Z" level=info msg="Start snapshots syncer" Mar 17 17:53:28.115404 containerd[1492]: time="2025-03-17T17:53:28.114953720Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:53:28.115404 containerd[1492]: time="2025-03-17T17:53:28.114965666Z" level=info msg="Start streaming server" Mar 17 17:53:28.116034 containerd[1492]: time="2025-03-17T17:53:28.116004236Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:53:28.116205 containerd[1492]: time="2025-03-17T17:53:28.116184799Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:53:28.117594 containerd[1492]: time="2025-03-17T17:53:28.116605866Z" level=info msg="containerd successfully booted in 0.101972s" Mar 17 17:53:28.116783 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:53:28.135090 systemd-networkd[1393]: eth1: Gained IPv6LL Mar 17 17:53:28.144211 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:53:28.148612 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:53:28.160331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:28.176228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:53:28.240817 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
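[Editor's aside] containerd's entries above are logfmt-style key=value pairs with quoted values, which makes them easy to post-process. A small parser sketch, exercised on the "successfully booted" line from this log.

    import re

    PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_logfmt(line: str) -> dict:
        # Pull key=value pairs; unquote values and unescape embedded quotes.
        fields = {}
        for key, value in PAIR.findall(line):
            if value.startswith('"'):
                value = value[1:-1].replace('\\"', '"')
            fields[key] = value
        return fields

    entry = parse_logfmt(
        'time="2025-03-17T17:53:28.116605866Z" level=info '
        'msg="containerd successfully booted in 0.101972s"'
    )
    print(entry["level"], "-", entry["msg"])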
Mar 17 17:53:28.392499 systemd-networkd[1393]: eth0: Gained IPv6LL Mar 17 17:53:28.510942 tar[1475]: linux-amd64/LICENSE Mar 17 17:53:28.511621 tar[1475]: linux-amd64/README.md Mar 17 17:53:28.538483 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:53:29.441219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:29.441484 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:53:29.444815 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:53:29.448265 systemd[1]: Startup finished in 1.254s (kernel) + 8.199s (initrd) + 8.117s (userspace) = 17.572s. Mar 17 17:53:30.279700 kubelet[1582]: E0317 17:53:30.279540 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:53:30.282183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:53:30.282370 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:53:30.283417 systemd[1]: kubelet.service: Consumed 1.381s CPU time, 238.8M memory peak. Mar 17 17:53:30.879634 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:53:30.896662 systemd[1]: Started sshd@0-24.199.119.133:22-139.178.68.195:49144.service - OpenSSH per-connection server daemon (139.178.68.195:49144). Mar 17 17:53:31.007647 sshd[1595]: Accepted publickey for core from 139.178.68.195 port 49144 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:53:31.016064 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:31.035012 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:53:31.042638 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:53:31.050002 systemd-logind[1469]: New session 1 of user core. Mar 17 17:53:31.064796 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:53:31.077486 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:53:31.094440 (systemd)[1599]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:53:31.101524 systemd-logind[1469]: New session c1 of user core. Mar 17 17:53:31.329132 systemd[1599]: Queued start job for default target default.target. Mar 17 17:53:31.339243 systemd[1599]: Created slice app.slice - User Application Slice. Mar 17 17:53:31.339310 systemd[1599]: Reached target paths.target - Paths. Mar 17 17:53:31.339387 systemd[1599]: Reached target timers.target - Timers. Mar 17 17:53:31.342123 systemd[1599]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:53:31.362536 systemd[1599]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:53:31.362924 systemd[1599]: Reached target sockets.target - Sockets. Mar 17 17:53:31.362992 systemd[1599]: Reached target basic.target - Basic System. Mar 17 17:53:31.363036 systemd[1599]: Reached target default.target - Main User Target. Mar 17 17:53:31.363072 systemd[1599]: Startup finished in 246ms. Mar 17 17:53:31.363641 systemd[1]: Started user@500.service - User Manager for UID 500. 
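[Editor's aside] The kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so the unit restart-loops until the node is joined to a cluster. For illustration only, a sketch that writes a minimal KubeletConfiguration of the kind kubelet is looking for; the fields are a minimal assumption, not Flatcar's or kubeadm's actual defaults.

    from pathlib import Path

    MINIMAL_CONFIG = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )

    def write_placeholder(path: str) -> None:
        target = Path(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(MINIMAL_CONFIG)

    write_placeholder("/tmp/kubelet-config.yaml")  # demo path; the real file is /var/lib/kubelet/config.yaml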
Mar 17 17:53:31.381199 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:53:31.464991 systemd[1]: Started sshd@1-24.199.119.133:22-139.178.68.195:49156.service - OpenSSH per-connection server daemon (139.178.68.195:49156). Mar 17 17:53:31.529763 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 49156 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:53:31.532405 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:31.543798 systemd-logind[1469]: New session 2 of user core. Mar 17 17:53:31.561161 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:53:31.631339 sshd[1612]: Connection closed by 139.178.68.195 port 49156 Mar 17 17:53:31.632175 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:31.645610 systemd[1]: sshd@1-24.199.119.133:22-139.178.68.195:49156.service: Deactivated successfully. Mar 17 17:53:31.649187 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:53:31.650409 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:53:31.657438 systemd[1]: Started sshd@2-24.199.119.133:22-139.178.68.195:49168.service - OpenSSH per-connection server daemon (139.178.68.195:49168). Mar 17 17:53:31.659131 systemd-logind[1469]: Removed session 2. Mar 17 17:53:31.745993 sshd[1617]: Accepted publickey for core from 139.178.68.195 port 49168 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:53:31.750878 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:31.760783 systemd-logind[1469]: New session 3 of user core. Mar 17 17:53:31.779771 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:53:31.846096 sshd[1620]: Connection closed by 139.178.68.195 port 49168 Mar 17 17:53:31.847103 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:31.864388 systemd[1]: sshd@2-24.199.119.133:22-139.178.68.195:49168.service: Deactivated successfully. Mar 17 17:53:31.875931 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:53:31.878524 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:53:31.885419 systemd[1]: Started sshd@3-24.199.119.133:22-139.178.68.195:49180.service - OpenSSH per-connection server daemon (139.178.68.195:49180). Mar 17 17:53:31.888490 systemd-logind[1469]: Removed session 3. Mar 17 17:53:31.957416 sshd[1625]: Accepted publickey for core from 139.178.68.195 port 49180 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:53:31.960018 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:31.969116 systemd-logind[1469]: New session 4 of user core. Mar 17 17:53:31.981241 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:53:32.052881 sshd[1628]: Connection closed by 139.178.68.195 port 49180 Mar 17 17:53:32.054141 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:32.065282 systemd[1]: sshd@3-24.199.119.133:22-139.178.68.195:49180.service: Deactivated successfully. Mar 17 17:53:32.068313 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:53:32.073143 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. 
Mar 17 17:53:32.078505 systemd[1]: Started sshd@4-24.199.119.133:22-139.178.68.195:49190.service - OpenSSH per-connection server daemon (139.178.68.195:49190). Mar 17 17:53:32.081562 systemd-logind[1469]: Removed session 4. Mar 17 17:53:32.151164 sshd[1633]: Accepted publickey for core from 139.178.68.195 port 49190 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:53:32.152640 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:32.159726 systemd-logind[1469]: New session 5 of user core. Mar 17 17:53:32.170999 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:53:32.248960 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:53:32.249491 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:32.266055 sudo[1637]: pam_unix(sudo:session): session closed for user root Mar 17 17:53:32.270084 sshd[1636]: Connection closed by 139.178.68.195 port 49190 Mar 17 17:53:32.271218 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:32.289310 systemd[1]: sshd@4-24.199.119.133:22-139.178.68.195:49190.service: Deactivated successfully. Mar 17 17:53:32.292170 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:53:32.296528 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:53:32.306462 systemd[1]: Started sshd@5-24.199.119.133:22-139.178.68.195:49200.service - OpenSSH per-connection server daemon (139.178.68.195:49200). Mar 17 17:53:32.309321 systemd-logind[1469]: Removed session 5. Mar 17 17:53:32.371429 sshd[1642]: Accepted publickey for core from 139.178.68.195 port 49200 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:53:32.374204 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:32.382282 systemd-logind[1469]: New session 6 of user core. Mar 17 17:53:32.388177 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:53:32.450094 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:53:32.451064 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:32.456096 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 17 17:53:32.464287 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:53:32.464820 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:32.488463 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:53:32.533783 augenrules[1669]: No rules Mar 17 17:53:32.534966 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:53:32.535280 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:53:32.537727 sudo[1646]: pam_unix(sudo:session): session closed for user root Mar 17 17:53:32.542899 sshd[1645]: Connection closed by 139.178.68.195 port 49200 Mar 17 17:53:32.543665 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:32.561500 systemd[1]: sshd@5-24.199.119.133:22-139.178.68.195:49200.service: Deactivated successfully. Mar 17 17:53:32.564743 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:53:32.567933 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. 
Mar 17 17:53:32.575569 systemd[1]: Started sshd@6-24.199.119.133:22-139.178.68.195:49216.service - OpenSSH per-connection server daemon (139.178.68.195:49216). Mar 17 17:53:32.577455 systemd-logind[1469]: Removed session 6. Mar 17 17:53:32.633979 sshd[1677]: Accepted publickey for core from 139.178.68.195 port 49216 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:53:32.636270 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:32.645022 systemd-logind[1469]: New session 7 of user core. Mar 17 17:53:32.654246 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:53:32.719300 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:53:32.719737 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:53:33.293369 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:53:33.295529 (dockerd)[1699]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:53:33.810617 dockerd[1699]: time="2025-03-17T17:53:33.810544174Z" level=info msg="Starting up" Mar 17 17:53:33.961640 dockerd[1699]: time="2025-03-17T17:53:33.961577566Z" level=info msg="Loading containers: start." Mar 17 17:53:34.242966 kernel: Initializing XFRM netlink socket Mar 17 17:53:34.379052 systemd-networkd[1393]: docker0: Link UP Mar 17 17:53:34.382444 systemd[1]: Started sshd@7-24.199.119.133:22-115.113.173.34:40912.service - OpenSSH per-connection server daemon (115.113.173.34:40912). Mar 17 17:53:34.431743 dockerd[1699]: time="2025-03-17T17:53:34.431686154Z" level=info msg="Loading containers: done." Mar 17 17:53:34.456212 dockerd[1699]: time="2025-03-17T17:53:34.456126806Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:53:34.456416 dockerd[1699]: time="2025-03-17T17:53:34.456307747Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:53:34.456581 dockerd[1699]: time="2025-03-17T17:53:34.456530208Z" level=info msg="Daemon has completed initialization" Mar 17 17:53:34.570874 dockerd[1699]: time="2025-03-17T17:53:34.570450056Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:53:34.571870 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:53:35.459279 sshd[1839]: Invalid user demo from 115.113.173.34 port 40912 Mar 17 17:53:35.529883 containerd[1492]: time="2025-03-17T17:53:35.529756280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 17:53:35.712036 sshd[1839]: Connection closed by invalid user demo 115.113.173.34 port 40912 [preauth] Mar 17 17:53:35.713993 systemd[1]: sshd@7-24.199.119.133:22-115.113.173.34:40912.service: Deactivated successfully. Mar 17 17:53:36.212379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2153409717.mount: Deactivated successfully. 
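[Editor's aside] Once the daemon above reports "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket. A stdlib-only health check against its /_ping endpoint, as a sketch; HTTP/1.0 makes the daemon close the connection after one response, so a read-until-EOF loop suffices.

    import socket

    def docker_ping(sock_path: str = "/run/docker.sock") -> str:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode()

    # print(docker_ping())  # expect a 200 response with body "OK" while dockerd is up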
Mar 17 17:53:37.582909 containerd[1492]: time="2025-03-17T17:53:37.582339310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:37.585427 containerd[1492]: time="2025-03-17T17:53:37.585350710Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959268" Mar 17 17:53:37.586339 containerd[1492]: time="2025-03-17T17:53:37.586243851Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:37.590353 containerd[1492]: time="2025-03-17T17:53:37.590260177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:37.592405 containerd[1492]: time="2025-03-17T17:53:37.591755567Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 2.061918728s" Mar 17 17:53:37.592405 containerd[1492]: time="2025-03-17T17:53:37.591806772Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 17 17:53:37.594190 containerd[1492]: time="2025-03-17T17:53:37.594025899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 17:53:39.757371 containerd[1492]: time="2025-03-17T17:53:39.757282781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:39.759085 containerd[1492]: time="2025-03-17T17:53:39.759009891Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713776" Mar 17 17:53:39.762368 containerd[1492]: time="2025-03-17T17:53:39.762249907Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:39.767830 containerd[1492]: time="2025-03-17T17:53:39.767700016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:39.770101 containerd[1492]: time="2025-03-17T17:53:39.769043650Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 2.174977382s" Mar 17 17:53:39.770101 containerd[1492]: time="2025-03-17T17:53:39.769160639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 17 17:53:39.770958 
containerd[1492]: time="2025-03-17T17:53:39.770921871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 17:53:40.534060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:53:40.542540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:40.724337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:40.729283 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:53:40.835019 kubelet[1968]: E0317 17:53:40.834645 1968 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:53:40.841707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:53:40.841943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:53:40.842431 systemd[1]: kubelet.service: Consumed 220ms CPU time, 96M memory peak. Mar 17 17:53:41.525870 containerd[1492]: time="2025-03-17T17:53:41.525327807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:41.529055 containerd[1492]: time="2025-03-17T17:53:41.528924407Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780368" Mar 17 17:53:41.530303 containerd[1492]: time="2025-03-17T17:53:41.530194998Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:41.541171 containerd[1492]: time="2025-03-17T17:53:41.541004058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:41.542804 containerd[1492]: time="2025-03-17T17:53:41.542489207Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 1.771351478s" Mar 17 17:53:41.542804 containerd[1492]: time="2025-03-17T17:53:41.542546973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 17 17:53:41.544477 containerd[1492]: time="2025-03-17T17:53:41.543608034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 17:53:41.721703 systemd-resolved[1340]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Mar 17 17:53:43.031511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2951397555.mount: Deactivated successfully. 
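[Editor's aside] The pull timings and sizes logged above imply a rough registry throughput. Computing it for the three control-plane images pulled so far, using the byte counts and durations exactly as logged:

    # (size in bytes, seconds), taken verbatim from the pull log lines above
    pulls = {
        "kube-apiserver:v1.31.7": (27_956_068, 2.061918728),
        "kube-controller-manager:v1.31.7": (26_201_384, 2.174977382),
        "kube-scheduler:v1.31.7": (20_267_994, 1.771351478),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 1e6:.1f} MB/s")
    # ~13.6, ~12.0 and ~11.4 MB/s respectively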
Mar 17 17:53:43.895265 containerd[1492]: time="2025-03-17T17:53:43.895123560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:43.897063 containerd[1492]: time="2025-03-17T17:53:43.896975845Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630" Mar 17 17:53:43.898389 containerd[1492]: time="2025-03-17T17:53:43.898016497Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:43.901366 containerd[1492]: time="2025-03-17T17:53:43.901279583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:43.903271 containerd[1492]: time="2025-03-17T17:53:43.902808056Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 2.359148004s" Mar 17 17:53:43.903271 containerd[1492]: time="2025-03-17T17:53:43.902889882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 17:53:43.903826 containerd[1492]: time="2025-03-17T17:53:43.903784800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:53:44.447896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798974281.mount: Deactivated successfully. Mar 17 17:53:44.776000 systemd-resolved[1340]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Mar 17 17:53:45.638730 containerd[1492]: time="2025-03-17T17:53:45.638552287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:45.640233 containerd[1492]: time="2025-03-17T17:53:45.639772285Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:53:45.641198 containerd[1492]: time="2025-03-17T17:53:45.641148285Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:45.644969 containerd[1492]: time="2025-03-17T17:53:45.644914568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:45.646798 containerd[1492]: time="2025-03-17T17:53:45.646731693Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.742759149s" Mar 17 17:53:45.646798 containerd[1492]: time="2025-03-17T17:53:45.646790761Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:53:45.647655 containerd[1492]: time="2025-03-17T17:53:45.647623685Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:53:46.177644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3150817336.mount: Deactivated successfully. 
Mar 17 17:53:46.182441 containerd[1492]: time="2025-03-17T17:53:46.182368779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:46.183736 containerd[1492]: time="2025-03-17T17:53:46.183671837Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 17 17:53:46.185861 containerd[1492]: time="2025-03-17T17:53:46.184122913Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:46.187319 containerd[1492]: time="2025-03-17T17:53:46.187259930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:46.188986 containerd[1492]: time="2025-03-17T17:53:46.188927782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 541.259632ms" Mar 17 17:53:46.189226 containerd[1492]: time="2025-03-17T17:53:46.189191145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 17:53:46.190184 containerd[1492]: time="2025-03-17T17:53:46.190139278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 17:53:46.721653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358065385.mount: Deactivated successfully. Mar 17 17:53:48.897806 containerd[1492]: time="2025-03-17T17:53:48.897730267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:48.900987 containerd[1492]: time="2025-03-17T17:53:48.900900675Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Mar 17 17:53:48.902940 containerd[1492]: time="2025-03-17T17:53:48.902248917Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:48.909736 containerd[1492]: time="2025-03-17T17:53:48.909653786Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.719191386s" Mar 17 17:53:48.910098 containerd[1492]: time="2025-03-17T17:53:48.910026025Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 17 17:53:48.910186 containerd[1492]: time="2025-03-17T17:53:48.909977103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:51.093076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 17 17:53:51.106027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:51.276137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:51.292011 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:53:51.355920 kubelet[2112]: E0317 17:53:51.354676 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:53:51.360212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:53:51.360599 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:53:51.361231 systemd[1]: kubelet.service: Consumed 188ms CPU time, 95.9M memory peak. Mar 17 17:53:52.830592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:52.831090 systemd[1]: kubelet.service: Consumed 188ms CPU time, 95.9M memory peak. Mar 17 17:53:52.846266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:52.890147 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit session-7.scope)... Mar 17 17:53:52.890172 systemd[1]: Reloading... Mar 17 17:53:53.111591 zram_generator::config[2175]: No configuration found. Mar 17 17:53:53.317338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:53:53.509211 systemd[1]: Reloading finished in 617 ms. Mar 17 17:53:53.599168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:53.606599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:53.612367 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:53:53.612751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:53.612854 systemd[1]: kubelet.service: Consumed 138ms CPU time, 82.6M memory peak. Mar 17 17:53:53.622269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:53:53.851133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:53:53.862516 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:53:53.946391 kubelet[2227]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:53:53.946391 kubelet[2227]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:53:53.946391 kubelet[2227]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
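The run.go:72 exit above is the usual pre-bootstrap state: kubelet.service is enabled before /var/lib/kubelet/config.yaml exists, so the kubelet crash-loops until provisioning writes the file, which is what lets the 17:53:53 start get further. The deprecation warnings on that second start point at the same file: those flags now belong in the KubeletConfiguration. A minimal sketch of the file, with illustrative values rather than anything recovered from this host:

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    # Minimal KubeletConfiguration; real bootstrap tooling writes a much fuller one.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd             # matches "CgroupDriver":"systemd" in the NodeConfig dump below
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # replaces the deprecated flag
    EOF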
Mar 17 17:53:53.947360 kubelet[2227]: I0317 17:53:53.947203 2227 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:53:54.552241 kubelet[2227]: I0317 17:53:54.552170 2227 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:53:54.553865 kubelet[2227]: I0317 17:53:54.552501 2227 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:53:54.553865 kubelet[2227]: I0317 17:53:54.552953 2227 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:53:54.597877 kubelet[2227]: I0317 17:53:54.597784 2227 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:53:54.599682 kubelet[2227]: E0317 17:53:54.598888 2227 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://24.199.119.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:54.619646 kubelet[2227]: E0317 17:53:54.619281 2227 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:53:54.619646 kubelet[2227]: I0317 17:53:54.619332 2227 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:53:54.625949 kubelet[2227]: I0317 17:53:54.625901 2227 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:53:54.628116 kubelet[2227]: I0317 17:53:54.627750 2227 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:53:54.628319 kubelet[2227]: I0317 17:53:54.628252 2227 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:53:54.628623 kubelet[2227]: I0317 17:53:54.628308 2227 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-f-ebc70812f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:53:54.628623 kubelet[2227]: I0317 17:53:54.628580 2227 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:53:54.628623 kubelet[2227]: I0317 17:53:54.628598 2227 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:53:54.628948 kubelet[2227]: I0317 17:53:54.628788 2227 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:53:54.631926 kubelet[2227]: I0317 17:53:54.631234 2227 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:53:54.631926 kubelet[2227]: I0317 17:53:54.631308 2227 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:53:54.631926 kubelet[2227]: I0317 17:53:54.631386 2227 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:53:54.631926 kubelet[2227]: I0317 17:53:54.631404 2227 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:53:54.641365 kubelet[2227]: I0317 17:53:54.641176 2227 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:53:54.643624 kubelet[2227]: I0317 17:53:54.643429 2227 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:53:54.645631 kubelet[2227]: W0317 17:53:54.644398 2227 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
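The NodeConfig dump above is the kubelet echoing its parsed configuration; in particular, the HardEvictionThresholds array is the internal form of the evictionHard map in KubeletConfiguration. Translated back into config syntax (same numbers as the log; only the YAML framing is added here):

    # evictionHard equivalent of the logged HardEvictionThresholds
    evictionHard:
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"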
Mar 17 17:53:54.645631 kubelet[2227]: I0317 17:53:54.645192 2227 server.go:1269] "Started kubelet" Mar 17 17:53:54.645631 kubelet[2227]: W0317 17:53:54.645365 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.119.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-f-ebc70812f4&limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused Mar 17 17:53:54.645631 kubelet[2227]: E0317 17:53:54.645438 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.119.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-f-ebc70812f4&limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:54.648741 kubelet[2227]: W0317 17:53:54.648610 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.119.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused Mar 17 17:53:54.648741 kubelet[2227]: E0317 17:53:54.648702 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.119.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:54.649019 kubelet[2227]: I0317 17:53:54.648752 2227 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:53:54.658722 kubelet[2227]: I0317 17:53:54.658673 2227 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:53:54.659156 kubelet[2227]: I0317 17:53:54.659063 2227 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:53:54.659812 kubelet[2227]: I0317 17:53:54.659756 2227 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:53:54.665111 kubelet[2227]: E0317 17:53:54.660246 2227 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.119.133:6443/api/v1/namespaces/default/events\": dial tcp 24.199.119.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-f-ebc70812f4.182da89fae9ed0b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-f-ebc70812f4,UID:ci-4230.1.0-f-ebc70812f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-f-ebc70812f4,},FirstTimestamp:2025-03-17 17:53:54.645160118 +0000 UTC m=+0.773344479,LastTimestamp:2025-03-17 17:53:54.645160118 +0000 UTC m=+0.773344479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-f-ebc70812f4,}" Mar 17 17:53:54.670342 kubelet[2227]: I0317 17:53:54.667056 2227 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:53:54.670342 kubelet[2227]: I0317 17:53:54.670268 2227 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:53:54.670813 kubelet[2227]: I0317 
17:53:54.670782 2227 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:53:54.671445 kubelet[2227]: E0317 17:53:54.671401 2227 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-f-ebc70812f4\" not found" Mar 17 17:53:54.675870 kubelet[2227]: I0317 17:53:54.675063 2227 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:53:54.675870 kubelet[2227]: I0317 17:53:54.675172 2227 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:53:54.675870 kubelet[2227]: W0317 17:53:54.675725 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.119.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused Mar 17 17:53:54.675870 kubelet[2227]: E0317 17:53:54.675800 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.119.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:54.678693 kubelet[2227]: E0317 17:53:54.678258 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.119.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-f-ebc70812f4?timeout=10s\": dial tcp 24.199.119.133:6443: connect: connection refused" interval="200ms" Mar 17 17:53:54.678693 kubelet[2227]: I0317 17:53:54.678496 2227 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:53:54.679118 kubelet[2227]: I0317 17:53:54.679088 2227 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:53:54.688219 kubelet[2227]: I0317 17:53:54.688175 2227 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:53:54.721162 kubelet[2227]: I0317 17:53:54.721072 2227 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:53:54.727263 kubelet[2227]: I0317 17:53:54.727207 2227 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:53:54.727263 kubelet[2227]: I0317 17:53:54.727277 2227 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:53:54.727531 kubelet[2227]: I0317 17:53:54.727308 2227 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:53:54.727531 kubelet[2227]: E0317 17:53:54.727388 2227 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:53:54.743970 kubelet[2227]: W0317 17:53:54.743914 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.119.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused Mar 17 17:53:54.744199 kubelet[2227]: E0317 17:53:54.743982 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.119.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:54.746062 kubelet[2227]: I0317 17:53:54.745906 2227 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:53:54.746062 kubelet[2227]: I0317 17:53:54.745931 2227 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:53:54.746062 kubelet[2227]: I0317 17:53:54.745957 2227 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:53:54.749112 kubelet[2227]: I0317 17:53:54.749065 2227 policy_none.go:49] "None policy: Start" Mar 17 17:53:54.750731 kubelet[2227]: I0317 17:53:54.750616 2227 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:53:54.750731 kubelet[2227]: I0317 17:53:54.750666 2227 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:53:54.763734 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:53:54.772021 kubelet[2227]: E0317 17:53:54.771958 2227 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-f-ebc70812f4\" not found" Mar 17 17:53:54.780912 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:53:54.790357 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:53:54.809746 kubelet[2227]: I0317 17:53:54.807614 2227 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:53:54.812032 kubelet[2227]: I0317 17:53:54.811996 2227 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:53:54.813336 kubelet[2227]: I0317 17:53:54.813254 2227 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:53:54.814392 kubelet[2227]: I0317 17:53:54.814353 2227 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:53:54.817868 kubelet[2227]: E0317 17:53:54.817798 2227 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-f-ebc70812f4\" not found" Mar 17 17:53:54.848769 systemd[1]: Created slice kubepods-burstable-podc0b1fa3167e4df8ac7c9a43263527ca4.slice - libcontainer container kubepods-burstable-podc0b1fa3167e4df8ac7c9a43263527ca4.slice. 
Mar 17 17:53:54.868304 systemd[1]: Created slice kubepods-burstable-pod3778147ca01ba067b4646d528ce0613d.slice - libcontainer container kubepods-burstable-pod3778147ca01ba067b4646d528ce0613d.slice. Mar 17 17:53:54.879876 kubelet[2227]: E0317 17:53:54.879200 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.119.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-f-ebc70812f4?timeout=10s\": dial tcp 24.199.119.133:6443: connect: connection refused" interval="400ms" Mar 17 17:53:54.880419 systemd[1]: Created slice kubepods-burstable-pod5e6a6d406c01e12f4867bbc2b0be3876.slice - libcontainer container kubepods-burstable-pod5e6a6d406c01e12f4867bbc2b0be3876.slice. Mar 17 17:53:54.916357 kubelet[2227]: I0317 17:53:54.915939 2227 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.916633 kubelet[2227]: E0317 17:53:54.916464 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.199.119.133:6443/api/v1/nodes\": dial tcp 24.199.119.133:6443: connect: connection refused" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.932157 kubelet[2227]: E0317 17:53:54.932041 2227 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.119.133:6443/api/v1/namespaces/default/events\": dial tcp 24.199.119.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-f-ebc70812f4.182da89fae9ed0b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-f-ebc70812f4,UID:ci-4230.1.0-f-ebc70812f4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-f-ebc70812f4,},FirstTimestamp:2025-03-17 17:53:54.645160118 +0000 UTC m=+0.773344479,LastTimestamp:2025-03-17 17:53:54.645160118 +0000 UTC m=+0.773344479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-f-ebc70812f4,}" Mar 17 17:53:54.976902 kubelet[2227]: I0317 17:53:54.976751 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e6a6d406c01e12f4867bbc2b0be3876-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-f-ebc70812f4\" (UID: \"5e6a6d406c01e12f4867bbc2b0be3876\") " pod="kube-system/kube-scheduler-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977505 kubelet[2227]: I0317 17:53:54.976906 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0b1fa3167e4df8ac7c9a43263527ca4-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-f-ebc70812f4\" (UID: \"c0b1fa3167e4df8ac7c9a43263527ca4\") " pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977505 kubelet[2227]: I0317 17:53:54.976958 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0b1fa3167e4df8ac7c9a43263527ca4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-f-ebc70812f4\" (UID: \"c0b1fa3167e4df8ac7c9a43263527ca4\") " pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977505 kubelet[2227]: I0317 17:53:54.976997 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977505 kubelet[2227]: I0317 17:53:54.977032 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977505 kubelet[2227]: I0317 17:53:54.977063 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977766 kubelet[2227]: I0317 17:53:54.977094 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977766 kubelet[2227]: I0317 17:53:54.977127 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:54.977766 kubelet[2227]: I0317 17:53:54.977157 2227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0b1fa3167e4df8ac7c9a43263527ca4-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-f-ebc70812f4\" (UID: \"c0b1fa3167e4df8ac7c9a43263527ca4\") " pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:55.118355 kubelet[2227]: I0317 17:53:55.118078 2227 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:55.119583 kubelet[2227]: E0317 17:53:55.119205 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.199.119.133:6443/api/v1/nodes\": dial tcp 24.199.119.133:6443: connect: connection refused" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:55.161201 kubelet[2227]: E0317 17:53:55.161136 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:55.164041 containerd[1492]: time="2025-03-17T17:53:55.163943898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-f-ebc70812f4,Uid:c0b1fa3167e4df8ac7c9a43263527ca4,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:55.172289 systemd-resolved[1340]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Mar 17 17:53:55.176922 kubelet[2227]: E0317 17:53:55.176447 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:55.177614 containerd[1492]: time="2025-03-17T17:53:55.177553634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-f-ebc70812f4,Uid:3778147ca01ba067b4646d528ce0613d,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:55.186235 kubelet[2227]: E0317 17:53:55.185785 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:55.187779 containerd[1492]: time="2025-03-17T17:53:55.187590414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-f-ebc70812f4,Uid:5e6a6d406c01e12f4867bbc2b0be3876,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:55.280923 kubelet[2227]: E0317 17:53:55.280802 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.119.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-f-ebc70812f4?timeout=10s\": dial tcp 24.199.119.133:6443: connect: connection refused" interval="800ms" Mar 17 17:53:55.521150 kubelet[2227]: I0317 17:53:55.520754 2227 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:55.521287 kubelet[2227]: E0317 17:53:55.521194 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.199.119.133:6443/api/v1/nodes\": dial tcp 24.199.119.133:6443: connect: connection refused" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:55.615701 kubelet[2227]: W0317 17:53:55.615526 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.119.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused Mar 17 17:53:55.615701 kubelet[2227]: E0317 17:53:55.615640 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.119.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:55.736938 kubelet[2227]: W0317 17:53:55.735223 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.119.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused Mar 17 17:53:55.736938 kubelet[2227]: E0317 17:53:55.735457 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.119.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:55.752513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330601953.mount: Deactivated successfully. 
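The recurring dns.go:153 "Nameserver limits exceeded" errors are the kubelet noting that the host resolver configuration lists more nameserver entries than the three that glibc (and hence pods inheriting this resolv.conf) will actually use; the "applied nameserver line" in the message is the truncated set passed through. A resolv.conf that triggers the warning would look like the sketch below; the real file on this droplet was not captured, so any entry beyond the logged three is purely illustrative:

    # /etc/resolv.conf (illustrative)
    nameserver 67.207.67.3
    nameserver 67.207.67.2
    nameserver 67.207.67.3
    nameserver 1.1.1.1    # anything past three entries exceeds the glibc MAXNS limit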
Mar 17 17:53:55.761866 containerd[1492]: time="2025-03-17T17:53:55.760393548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:53:55.766088 containerd[1492]: time="2025-03-17T17:53:55.765969925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:53:55.769630 containerd[1492]: time="2025-03-17T17:53:55.769546620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:53:55.772031 containerd[1492]: time="2025-03-17T17:53:55.771685390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:53:55.773175 containerd[1492]: time="2025-03-17T17:53:55.772931816Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:53:55.778783 containerd[1492]: time="2025-03-17T17:53:55.778219162Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:53:55.783218 containerd[1492]: time="2025-03-17T17:53:55.783152462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:53:55.785812 containerd[1492]: time="2025-03-17T17:53:55.784963098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.905327ms" Mar 17 17:53:55.787926 containerd[1492]: time="2025-03-17T17:53:55.787538600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:53:55.790658 containerd[1492]: time="2025-03-17T17:53:55.790513389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 622.249898ms" Mar 17 17:53:55.791608 containerd[1492]: time="2025-03-17T17:53:55.791554140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.790775ms" Mar 17 17:53:55.976092 kubelet[2227]: W0317 17:53:55.975878 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.119.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-f-ebc70812f4&limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused 
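All three sandboxes above are built on registry.k8s.io/pause:3.8, which the containerd CRI plugin selects from its own sandbox_image setting, independent of the pause:3.10 pulled earlier in this log; the --pod-infra-container-image deprecation warning at 17:53:53 concerns exactly this split between kubelet flag and runtime config. The containerd side is one line of config; section and key names below are the real containerd 1.7 ones, while the file fragment itself is a sketch:

    # /etc/containerd/config.toml (fragment)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"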
Mar 17 17:53:55.976092 kubelet[2227]: E0317 17:53:55.976030 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.119.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-f-ebc70812f4&limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:55.999453 containerd[1492]: time="2025-03-17T17:53:55.999103588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:55.999453 containerd[1492]: time="2025-03-17T17:53:55.999211178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:55.999453 containerd[1492]: time="2025-03-17T17:53:55.999237105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.003392 containerd[1492]: time="2025-03-17T17:53:55.994689181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:56.003392 containerd[1492]: time="2025-03-17T17:53:55.999557178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:56.003392 containerd[1492]: time="2025-03-17T17:53:55.999574808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.003392 containerd[1492]: time="2025-03-17T17:53:55.999711760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.003392 containerd[1492]: time="2025-03-17T17:53:55.999372706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.004497 containerd[1492]: time="2025-03-17T17:53:56.003956434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:56.004497 containerd[1492]: time="2025-03-17T17:53:56.004100560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:56.004497 containerd[1492]: time="2025-03-17T17:53:56.004123344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.005656 containerd[1492]: time="2025-03-17T17:53:56.004708538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:56.049674 systemd[1]: Started cri-containerd-ee4842dbebedd87e6da3470d30e46f4522aef4aa3aa2b2fc3872d6727c062552.scope - libcontainer container ee4842dbebedd87e6da3470d30e46f4522aef4aa3aa2b2fc3872d6727c062552. Mar 17 17:53:56.072405 systemd[1]: Started cri-containerd-d389350d877258fb0467bc1b9e0d628727733aeaa50186bdd985ead907423a0f.scope - libcontainer container d389350d877258fb0467bc1b9e0d628727733aeaa50186bdd985ead907423a0f. 
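The interleaved "loading plugin io.containerd.*" triples come from three containerd-shim-runc-v2 processes starting, one per pod sandbox; runtime=io.containerd.runc.v2 in each message names the shim the CRI plugin was configured to use. Paired with the systemd cgroup driver seen in the kubelet's NodeConfig, the matching containerd fragment is conventionally (a sketch using the standard containerd 1.7 keys):

    # /etc/containerd/config.toml (fragment)
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true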
Mar 17 17:53:56.083517 kubelet[2227]: E0317 17:53:56.083327 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.119.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-f-ebc70812f4?timeout=10s\": dial tcp 24.199.119.133:6443: connect: connection refused" interval="1.6s" Mar 17 17:53:56.094361 systemd[1]: Started cri-containerd-b2a8ed81886edb0c66b21e4407267f3c5c1ef512c09dd51ba7b385848f67d767.scope - libcontainer container b2a8ed81886edb0c66b21e4407267f3c5c1ef512c09dd51ba7b385848f67d767. Mar 17 17:53:56.181763 containerd[1492]: time="2025-03-17T17:53:56.181717491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-f-ebc70812f4,Uid:3778147ca01ba067b4646d528ce0613d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee4842dbebedd87e6da3470d30e46f4522aef4aa3aa2b2fc3872d6727c062552\"" Mar 17 17:53:56.186277 kubelet[2227]: E0317 17:53:56.185908 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:56.191526 kubelet[2227]: W0317 17:53:56.191455 2227 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.119.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.119.133:6443: connect: connection refused Mar 17 17:53:56.192148 kubelet[2227]: E0317 17:53:56.192019 2227 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.119.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.119.133:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:53:56.196999 containerd[1492]: time="2025-03-17T17:53:56.196889602Z" level=info msg="CreateContainer within sandbox \"ee4842dbebedd87e6da3470d30e46f4522aef4aa3aa2b2fc3872d6727c062552\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:53:56.206957 containerd[1492]: time="2025-03-17T17:53:56.205803957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-f-ebc70812f4,Uid:5e6a6d406c01e12f4867bbc2b0be3876,Namespace:kube-system,Attempt:0,} returns sandbox id \"d389350d877258fb0467bc1b9e0d628727733aeaa50186bdd985ead907423a0f\"" Mar 17 17:53:56.208431 kubelet[2227]: E0317 17:53:56.208306 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:56.212017 containerd[1492]: time="2025-03-17T17:53:56.211763718Z" level=info msg="CreateContainer within sandbox \"d389350d877258fb0467bc1b9e0d628727733aeaa50186bdd985ead907423a0f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:53:56.224245 containerd[1492]: time="2025-03-17T17:53:56.224142613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-f-ebc70812f4,Uid:c0b1fa3167e4df8ac7c9a43263527ca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a8ed81886edb0c66b21e4407267f3c5c1ef512c09dd51ba7b385848f67d767\"" Mar 17 17:53:56.226047 kubelet[2227]: E0317 17:53:56.225755 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Mar 17 17:53:56.231478 containerd[1492]: time="2025-03-17T17:53:56.231298827Z" level=info msg="CreateContainer within sandbox \"b2a8ed81886edb0c66b21e4407267f3c5c1ef512c09dd51ba7b385848f67d767\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:53:56.234800 containerd[1492]: time="2025-03-17T17:53:56.234735488Z" level=info msg="CreateContainer within sandbox \"ee4842dbebedd87e6da3470d30e46f4522aef4aa3aa2b2fc3872d6727c062552\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"84a3301bf5522e90d44625109a1651a749c37fd4fb3a00f7b8d95a21ca7258fc\"" Mar 17 17:53:56.236260 containerd[1492]: time="2025-03-17T17:53:56.235690417Z" level=info msg="StartContainer for \"84a3301bf5522e90d44625109a1651a749c37fd4fb3a00f7b8d95a21ca7258fc\"" Mar 17 17:53:56.246489 containerd[1492]: time="2025-03-17T17:53:56.246418048Z" level=info msg="CreateContainer within sandbox \"d389350d877258fb0467bc1b9e0d628727733aeaa50186bdd985ead907423a0f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d9cc62abed85eff911436d8d798a7a5c0982880e258f5b21e047ecde7574dae\"" Mar 17 17:53:56.249116 containerd[1492]: time="2025-03-17T17:53:56.249074190Z" level=info msg="StartContainer for \"4d9cc62abed85eff911436d8d798a7a5c0982880e258f5b21e047ecde7574dae\"" Mar 17 17:53:56.258361 containerd[1492]: time="2025-03-17T17:53:56.257492023Z" level=info msg="CreateContainer within sandbox \"b2a8ed81886edb0c66b21e4407267f3c5c1ef512c09dd51ba7b385848f67d767\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9cec8566574f6a31aa717979be13907fbd0ed608a830f71808fb385c41facec\"" Mar 17 17:53:56.259209 containerd[1492]: time="2025-03-17T17:53:56.259159912Z" level=info msg="StartContainer for \"c9cec8566574f6a31aa717979be13907fbd0ed608a830f71808fb385c41facec\"" Mar 17 17:53:56.301205 systemd[1]: Started cri-containerd-84a3301bf5522e90d44625109a1651a749c37fd4fb3a00f7b8d95a21ca7258fc.scope - libcontainer container 84a3301bf5522e90d44625109a1651a749c37fd4fb3a00f7b8d95a21ca7258fc. Mar 17 17:53:56.324148 systemd[1]: Started cri-containerd-4d9cc62abed85eff911436d8d798a7a5c0982880e258f5b21e047ecde7574dae.scope - libcontainer container 4d9cc62abed85eff911436d8d798a7a5c0982880e258f5b21e047ecde7574dae. Mar 17 17:53:56.331181 kubelet[2227]: I0317 17:53:56.330134 2227 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:56.331181 kubelet[2227]: E0317 17:53:56.330807 2227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.199.119.133:6443/api/v1/nodes\": dial tcp 24.199.119.133:6443: connect: connection refused" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:56.342584 systemd[1]: Started cri-containerd-c9cec8566574f6a31aa717979be13907fbd0ed608a830f71808fb385c41facec.scope - libcontainer container c9cec8566574f6a31aa717979be13907fbd0ed608a830f71808fb385c41facec. 
Mar 17 17:53:56.422870 containerd[1492]: time="2025-03-17T17:53:56.421900352Z" level=info msg="StartContainer for \"84a3301bf5522e90d44625109a1651a749c37fd4fb3a00f7b8d95a21ca7258fc\" returns successfully" Mar 17 17:53:56.455883 containerd[1492]: time="2025-03-17T17:53:56.453494804Z" level=info msg="StartContainer for \"4d9cc62abed85eff911436d8d798a7a5c0982880e258f5b21e047ecde7574dae\" returns successfully" Mar 17 17:53:56.475753 containerd[1492]: time="2025-03-17T17:53:56.475706562Z" level=info msg="StartContainer for \"c9cec8566574f6a31aa717979be13907fbd0ed608a830f71808fb385c41facec\" returns successfully" Mar 17 17:53:56.758983 kubelet[2227]: E0317 17:53:56.758629 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:56.765881 kubelet[2227]: E0317 17:53:56.764407 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:56.778969 kubelet[2227]: E0317 17:53:56.778928 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:57.778936 kubelet[2227]: E0317 17:53:57.778887 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:57.934810 kubelet[2227]: I0317 17:53:57.934164 2227 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:58.937392 kubelet[2227]: E0317 17:53:58.937318 2227 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.0-f-ebc70812f4\" not found" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:59.094000 kubelet[2227]: I0317 17:53:59.091497 2227 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:59.266021 kubelet[2227]: E0317 17:53:59.265458 2227 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:53:59.266021 kubelet[2227]: E0317 17:53:59.265756 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:53:59.650478 kubelet[2227]: I0317 17:53:59.650328 2227 apiserver.go:52] "Watching apiserver" Mar 17 17:53:59.675670 kubelet[2227]: I0317 17:53:59.675518 2227 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:54:00.461197 kubelet[2227]: W0317 17:54:00.459774 2227 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:54:00.461197 kubelet[2227]: E0317 17:54:00.460354 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:00.789273 kubelet[2227]: E0317 17:54:00.787545 2227 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:01.348055 systemd[1]: Reload requested from client PID 2507 ('systemctl') (unit session-7.scope)... Mar 17 17:54:01.348087 systemd[1]: Reloading... Mar 17 17:54:01.548880 zram_generator::config[2551]: No configuration found. Mar 17 17:54:01.848676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:54:02.151566 systemd[1]: Reloading finished in 802 ms. Mar 17 17:54:02.203874 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:54:02.226963 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:54:02.227667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:54:02.227997 systemd[1]: kubelet.service: Consumed 1.302s CPU time, 113.5M memory peak. Mar 17 17:54:02.239572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:54:02.552868 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:54:02.553248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:54:02.779036 kubelet[2601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:54:02.779036 kubelet[2601]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:54:02.779036 kubelet[2601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:54:02.782799 kubelet[2601]: I0317 17:54:02.779979 2601 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:54:02.799911 kubelet[2601]: I0317 17:54:02.799787 2601 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:54:02.799911 kubelet[2601]: I0317 17:54:02.799854 2601 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:54:02.800461 kubelet[2601]: I0317 17:54:02.800427 2601 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:54:02.804651 kubelet[2601]: I0317 17:54:02.804443 2601 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:54:02.809688 kubelet[2601]: I0317 17:54:02.809601 2601 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:54:02.832717 kubelet[2601]: E0317 17:54:02.832640 2601 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:54:02.832717 kubelet[2601]: I0317 17:54:02.832710 2601 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Mar 17 17:54:02.849249 kubelet[2601]: I0317 17:54:02.849105 2601 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:54:02.850683 kubelet[2601]: I0317 17:54:02.849737 2601 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:54:02.850683 kubelet[2601]: I0317 17:54:02.850047 2601 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:54:02.850683 kubelet[2601]: I0317 17:54:02.850108 2601 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-f-ebc70812f4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:54:02.850683 kubelet[2601]: I0317 17:54:02.850398 2601 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:54:02.851232 kubelet[2601]: I0317 17:54:02.850415 2601 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:54:02.851232 kubelet[2601]: I0317 17:54:02.850493 2601 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:54:02.851908 kubelet[2601]: I0317 17:54:02.851420 2601 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:54:02.852720 kubelet[2601]: I0317 17:54:02.852621 2601 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:54:02.854734 kubelet[2601]: I0317 17:54:02.854699 2601 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:54:02.855889 kubelet[2601]: I0317 17:54:02.855009 2601 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:54:02.862108 kubelet[2601]: I0317 17:54:02.862058 2601 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:54:02.868370 sudo[2615]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:54:02.869002 sudo[2615]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) Mar 17 17:54:02.876891 kubelet[2601]: I0317 17:54:02.875556 2601 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:54:02.887163 kubelet[2601]: I0317 17:54:02.887107 2601 server.go:1269] "Started kubelet" Mar 17 17:54:02.907968 kubelet[2601]: I0317 17:54:02.907335 2601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:54:02.922354 kubelet[2601]: I0317 17:54:02.921653 2601 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:54:02.937888 kubelet[2601]: I0317 17:54:02.935094 2601 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:54:02.941547 kubelet[2601]: I0317 17:54:02.939671 2601 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:54:02.948306 kubelet[2601]: I0317 17:54:02.947345 2601 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:54:02.953625 kubelet[2601]: I0317 17:54:02.947750 2601 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:54:02.960272 kubelet[2601]: I0317 17:54:02.943633 2601 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:54:02.960272 kubelet[2601]: I0317 17:54:02.947765 2601 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:54:02.960272 kubelet[2601]: I0317 17:54:02.955771 2601 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:54:02.963640 kubelet[2601]: I0317 17:54:02.963521 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:54:02.966679 kubelet[2601]: I0317 17:54:02.965483 2601 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:54:02.969521 kubelet[2601]: I0317 17:54:02.968613 2601 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:54:02.976935 kubelet[2601]: I0317 17:54:02.966148 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:54:02.976935 kubelet[2601]: I0317 17:54:02.973634 2601 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:54:02.976935 kubelet[2601]: I0317 17:54:02.973662 2601 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:54:02.976935 kubelet[2601]: E0317 17:54:02.973746 2601 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:54:02.982965 kubelet[2601]: E0317 17:54:02.981921 2601 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:54:02.985296 kubelet[2601]: I0317 17:54:02.985005 2601 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:54:03.075186 kubelet[2601]: E0317 17:54:03.075009 2601 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:54:03.193614 kubelet[2601]: I0317 17:54:03.190305 2601 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:54:03.193614 kubelet[2601]: I0317 17:54:03.190346 2601 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:54:03.193614 kubelet[2601]: I0317 17:54:03.190383 2601 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:54:03.193614 kubelet[2601]: I0317 17:54:03.190683 2601 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:54:03.193614 kubelet[2601]: I0317 17:54:03.190729 2601 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:54:03.193614 kubelet[2601]: I0317 17:54:03.190764 2601 policy_none.go:49] "None policy: Start" Mar 17 17:54:03.200591 kubelet[2601]: I0317 17:54:03.200202 2601 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:54:03.201314 kubelet[2601]: I0317 17:54:03.201005 2601 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:54:03.202874 kubelet[2601]: I0317 17:54:03.202333 2601 state_mem.go:75] "Updated machine memory state" Mar 17 17:54:03.225479 kubelet[2601]: I0317 17:54:03.224547 2601 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:54:03.227418 kubelet[2601]: I0317 17:54:03.226209 2601 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:54:03.227418 kubelet[2601]: I0317 17:54:03.226234 2601 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:54:03.228285 kubelet[2601]: I0317 17:54:03.228246 2601 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:54:03.305761 kubelet[2601]: W0317 17:54:03.300960 2601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:54:03.316184 kubelet[2601]: W0317 17:54:03.309618 2601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:54:03.316184 kubelet[2601]: W0317 17:54:03.314290 2601 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:54:03.316184 kubelet[2601]: E0317 17:54:03.314525 2601 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.0-f-ebc70812f4\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.352742 kubelet[2601]: I0317 17:54:03.352592 2601 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.358725 kubelet[2601]: I0317 17:54:03.358530 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " 
pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.358725 kubelet[2601]: I0317 17:54:03.358723 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.361034 kubelet[2601]: I0317 17:54:03.358757 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.361034 kubelet[2601]: I0317 17:54:03.358785 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.361034 kubelet[2601]: I0317 17:54:03.358812 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e6a6d406c01e12f4867bbc2b0be3876-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-f-ebc70812f4\" (UID: \"5e6a6d406c01e12f4867bbc2b0be3876\") " pod="kube-system/kube-scheduler-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.361034 kubelet[2601]: I0317 17:54:03.358857 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0b1fa3167e4df8ac7c9a43263527ca4-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-f-ebc70812f4\" (UID: \"c0b1fa3167e4df8ac7c9a43263527ca4\") " pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.361034 kubelet[2601]: I0317 17:54:03.358878 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0b1fa3167e4df8ac7c9a43263527ca4-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-f-ebc70812f4\" (UID: \"c0b1fa3167e4df8ac7c9a43263527ca4\") " pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.361261 kubelet[2601]: I0317 17:54:03.358901 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0b1fa3167e4df8ac7c9a43263527ca4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-f-ebc70812f4\" (UID: \"c0b1fa3167e4df8ac7c9a43263527ca4\") " pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.361261 kubelet[2601]: I0317 17:54:03.358926 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3778147ca01ba067b4646d528ce0613d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-f-ebc70812f4\" (UID: \"3778147ca01ba067b4646d528ce0613d\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.395265 kubelet[2601]: I0317 
17:54:03.395203 2601 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.395520 kubelet[2601]: I0317 17:54:03.395356 2601 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-f-ebc70812f4" Mar 17 17:54:03.602254 kubelet[2601]: E0317 17:54:03.602181 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:03.614637 kubelet[2601]: E0317 17:54:03.614453 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:03.615401 kubelet[2601]: E0317 17:54:03.615328 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:03.873881 kubelet[2601]: I0317 17:54:03.861642 2601 apiserver.go:52] "Watching apiserver" Mar 17 17:54:03.955917 kubelet[2601]: I0317 17:54:03.955836 2601 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:54:04.091225 kubelet[2601]: E0317 17:54:04.091186 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:04.091611 kubelet[2601]: E0317 17:54:04.091579 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:04.091975 kubelet[2601]: E0317 17:54:04.091948 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:04.230262 kubelet[2601]: I0317 17:54:04.228502 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-f-ebc70812f4" podStartSLOduration=4.228471483 podStartE2EDuration="4.228471483s" podCreationTimestamp="2025-03-17 17:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:54:04.199680245 +0000 UTC m=+1.595961153" watchObservedRunningTime="2025-03-17 17:54:04.228471483 +0000 UTC m=+1.624752388" Mar 17 17:54:04.233227 kubelet[2601]: I0317 17:54:04.232786 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-f-ebc70812f4" podStartSLOduration=1.232652756 podStartE2EDuration="1.232652756s" podCreationTimestamp="2025-03-17 17:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:54:04.232004314 +0000 UTC m=+1.628285222" watchObservedRunningTime="2025-03-17 17:54:04.232652756 +0000 UTC m=+1.628933668" Mar 17 17:54:04.290137 sudo[2615]: pam_unix(sudo:session): session closed for user root Mar 17 17:54:05.096673 kubelet[2601]: E0317 17:54:05.096253 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:06.097361 kubelet[2601]: E0317 
17:54:06.096818 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:06.783876 sudo[1681]: pam_unix(sudo:session): session closed for user root Mar 17 17:54:06.789872 sshd[1680]: Connection closed by 139.178.68.195 port 49216 Mar 17 17:54:06.789129 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Mar 17 17:54:06.796132 systemd[1]: sshd@6-24.199.119.133:22-139.178.68.195:49216.service: Deactivated successfully. Mar 17 17:54:06.801265 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:54:06.801894 systemd[1]: session-7.scope: Consumed 7.051s CPU time, 223.8M memory peak. Mar 17 17:54:06.810106 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:54:06.812975 systemd-logind[1469]: Removed session 7. Mar 17 17:54:07.194328 systemd[1]: Started sshd@8-24.199.119.133:22-218.92.0.188:44318.service - OpenSSH per-connection server daemon (218.92.0.188:44318). Mar 17 17:54:07.433257 kubelet[2601]: I0317 17:54:07.433167 2601 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:54:07.434560 containerd[1492]: time="2025-03-17T17:54:07.434474146Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:54:07.435993 kubelet[2601]: I0317 17:54:07.435479 2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:54:08.154204 kubelet[2601]: I0317 17:54:08.154110 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-f-ebc70812f4" podStartSLOduration=5.154055982 podStartE2EDuration="5.154055982s" podCreationTimestamp="2025-03-17 17:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:54:04.26563862 +0000 UTC m=+1.661919530" watchObservedRunningTime="2025-03-17 17:54:08.154055982 +0000 UTC m=+5.550336888" Mar 17 17:54:08.175784 systemd[1]: Created slice kubepods-besteffort-pod4ba7fb16_d8ac_4150_a01a_8fd676e83978.slice - libcontainer container kubepods-besteffort-pod4ba7fb16_d8ac_4150_a01a_8fd676e83978.slice. Mar 17 17:54:08.200776 systemd[1]: Created slice kubepods-burstable-pod8ccdb914_4378_4042_8c14_9d432415fa36.slice - libcontainer container kubepods-burstable-pod8ccdb914_4378_4042_8c14_9d432415fa36.slice. 
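
The kubelet entry above shows the node receiving PodCIDR 192.168.0.0/24 and pushing it to the runtime via CRI ("Updating runtime config through cri with podcidr"). A minimal Go sketch of what that /24 implies for per-node pod capacity; the arithmetic is illustrative, not kubelet code:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// PodCIDR assigned to this node, copied from the kubelet log above.
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size() // 24, 32
	// Subtract network and broadcast addresses; CNI plugins typically
	// reserve a few more (gateway etc.), so treat this as an upper bound.
	usable := (1 << (bits - ones)) - 2
	fmt.Printf("pod CIDR %s: at most %d pod IPs on this node\n", cidr, usable)
}
```
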
Mar 17 17:54:08.308095 kubelet[2601]: I0317 17:54:08.307959 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ba7fb16-d8ac-4150-a01a-8fd676e83978-xtables-lock\") pod \"kube-proxy-kdjhf\" (UID: \"4ba7fb16-d8ac-4150-a01a-8fd676e83978\") " pod="kube-system/kube-proxy-kdjhf" Mar 17 17:54:08.308095 kubelet[2601]: I0317 17:54:08.308013 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-etc-cni-netd\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.308095 kubelet[2601]: I0317 17:54:08.308031 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-kernel\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.308760 kubelet[2601]: I0317 17:54:08.308101 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cni-path\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.308760 kubelet[2601]: I0317 17:54:08.308144 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-hubble-tls\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.308760 kubelet[2601]: I0317 17:54:08.308167 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxlsl\" (UniqueName: \"kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-kube-api-access-vxlsl\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.308760 kubelet[2601]: I0317 17:54:08.308193 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ba7fb16-d8ac-4150-a01a-8fd676e83978-kube-proxy\") pod \"kube-proxy-kdjhf\" (UID: \"4ba7fb16-d8ac-4150-a01a-8fd676e83978\") " pod="kube-system/kube-proxy-kdjhf" Mar 17 17:54:08.308760 kubelet[2601]: I0317 17:54:08.308238 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-lib-modules\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.308760 kubelet[2601]: I0317 17:54:08.308265 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-cgroup\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.309011 kubelet[2601]: I0317 17:54:08.308325 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-net\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.309011 kubelet[2601]: I0317 17:54:08.308360 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-run\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.309011 kubelet[2601]: I0317 17:54:08.308378 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-bpf-maps\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.310093 kubelet[2601]: I0317 17:54:08.309298 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-xtables-lock\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.310093 kubelet[2601]: I0317 17:54:08.310000 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-config-path\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.310093 kubelet[2601]: I0317 17:54:08.310073 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ba7fb16-d8ac-4150-a01a-8fd676e83978-lib-modules\") pod \"kube-proxy-kdjhf\" (UID: \"4ba7fb16-d8ac-4150-a01a-8fd676e83978\") " pod="kube-system/kube-proxy-kdjhf" Mar 17 17:54:08.312125 kubelet[2601]: I0317 17:54:08.310111 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-hostproc\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.312125 kubelet[2601]: I0317 17:54:08.310175 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kspsp\" (UniqueName: \"kubernetes.io/projected/4ba7fb16-d8ac-4150-a01a-8fd676e83978-kube-api-access-kspsp\") pod \"kube-proxy-kdjhf\" (UID: \"4ba7fb16-d8ac-4150-a01a-8fd676e83978\") " pod="kube-system/kube-proxy-kdjhf" Mar 17 17:54:08.312125 kubelet[2601]: I0317 17:54:08.310208 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ccdb914-4378-4042-8c14-9d432415fa36-clustermesh-secrets\") pod \"cilium-hvclt\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " pod="kube-system/cilium-hvclt" Mar 17 17:54:08.394441 sshd-session[2675]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:54:08.493402 kubelet[2601]: E0317 17:54:08.493347 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:08.496697 containerd[1492]: time="2025-03-17T17:54:08.496075982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdjhf,Uid:4ba7fb16-d8ac-4150-a01a-8fd676e83978,Namespace:kube-system,Attempt:0,}" Mar 17 17:54:08.509874 kubelet[2601]: E0317 17:54:08.508530 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:08.512184 containerd[1492]: time="2025-03-17T17:54:08.512131294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvclt,Uid:8ccdb914-4378-4042-8c14-9d432415fa36,Namespace:kube-system,Attempt:0,}" Mar 17 17:54:08.559396 containerd[1492]: time="2025-03-17T17:54:08.558568662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:54:08.563027 containerd[1492]: time="2025-03-17T17:54:08.558684885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:54:08.563027 containerd[1492]: time="2025-03-17T17:54:08.561564096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:08.563027 containerd[1492]: time="2025-03-17T17:54:08.561738261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:08.612605 kubelet[2601]: I0317 17:54:08.612562 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4xw\" (UniqueName: \"kubernetes.io/projected/7f692ac4-12a5-4247-b4c3-da73eae3ab35-kube-api-access-5l4xw\") pod \"cilium-operator-5d85765b45-5thkq\" (UID: \"7f692ac4-12a5-4247-b4c3-da73eae3ab35\") " pod="kube-system/cilium-operator-5d85765b45-5thkq" Mar 17 17:54:08.612945 kubelet[2601]: I0317 17:54:08.612611 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f692ac4-12a5-4247-b4c3-da73eae3ab35-cilium-config-path\") pod \"cilium-operator-5d85765b45-5thkq\" (UID: \"7f692ac4-12a5-4247-b4c3-da73eae3ab35\") " pod="kube-system/cilium-operator-5d85765b45-5thkq" Mar 17 17:54:08.632204 systemd[1]: Started cri-containerd-6263bd732f400bc44ce96cdbe9778cd81c6654c0ba9dc41b497333a41c923c8c.scope - libcontainer container 6263bd732f400bc44ce96cdbe9778cd81c6654c0ba9dc41b497333a41c923c8c. Mar 17 17:54:08.647725 systemd[1]: Created slice kubepods-besteffort-pod7f692ac4_12a5_4247_b4c3_da73eae3ab35.slice - libcontainer container kubepods-besteffort-pod7f692ac4_12a5_4247_b4c3_da73eae3ab35.slice. Mar 17 17:54:08.661860 containerd[1492]: time="2025-03-17T17:54:08.660903336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:54:08.661860 containerd[1492]: time="2025-03-17T17:54:08.661005060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:54:08.661860 containerd[1492]: time="2025-03-17T17:54:08.661030392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:08.661860 containerd[1492]: time="2025-03-17T17:54:08.661190139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:08.707197 systemd[1]: Started cri-containerd-9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd.scope - libcontainer container 9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd. Mar 17 17:54:08.772654 containerd[1492]: time="2025-03-17T17:54:08.771604702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvclt,Uid:8ccdb914-4378-4042-8c14-9d432415fa36,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\"" Mar 17 17:54:08.776231 kubelet[2601]: E0317 17:54:08.775526 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:08.785343 containerd[1492]: time="2025-03-17T17:54:08.785007673Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:54:08.790880 containerd[1492]: time="2025-03-17T17:54:08.790769510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdjhf,Uid:4ba7fb16-d8ac-4150-a01a-8fd676e83978,Namespace:kube-system,Attempt:0,} returns sandbox id \"6263bd732f400bc44ce96cdbe9778cd81c6654c0ba9dc41b497333a41c923c8c\"" Mar 17 17:54:08.794952 kubelet[2601]: E0317 17:54:08.793644 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:08.797981 containerd[1492]: time="2025-03-17T17:54:08.797903333Z" level=info msg="CreateContainer within sandbox \"6263bd732f400bc44ce96cdbe9778cd81c6654c0ba9dc41b497333a41c923c8c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:54:08.823217 containerd[1492]: time="2025-03-17T17:54:08.823113227Z" level=info msg="CreateContainer within sandbox \"6263bd732f400bc44ce96cdbe9778cd81c6654c0ba9dc41b497333a41c923c8c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f6c91c4bd99b334557e6468d2e6ee563eb80569711ac488a5bb62177d20bd33\"" Mar 17 17:54:08.824423 containerd[1492]: time="2025-03-17T17:54:08.824374105Z" level=info msg="StartContainer for \"9f6c91c4bd99b334557e6468d2e6ee563eb80569711ac488a5bb62177d20bd33\"" Mar 17 17:54:08.882535 systemd[1]: Started cri-containerd-9f6c91c4bd99b334557e6468d2e6ee563eb80569711ac488a5bb62177d20bd33.scope - libcontainer container 9f6c91c4bd99b334557e6468d2e6ee563eb80569711ac488a5bb62177d20bd33. 
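
The containerd entries here trace the CRI sequence for kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer places the kube-proxy container inside it, and StartContainer runs it. A schematic Go sketch of that ordering; the interface below is a deliberate simplification for illustration, not the real CRI gRPC API:

```go
package main

import "fmt"

// criRuntime mirrors, in simplified form, the three CRI calls visible in
// the log: RunPodSandbox -> CreateContainer -> StartContainer. The real
// API is gRPC with much richer request types; this is illustrative only.
type criRuntime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("ctr-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt criRuntime = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-system/kube-proxy-kdjhf")
	ctr, _ := rt.CreateContainer(sb, "kube-proxy")
	if err := rt.StartContainer(ctr); err == nil {
		// Corresponds to the log's "StartContainer ... returns successfully".
		fmt.Println("started", ctr, "in", sb)
	}
}
```
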
Mar 17 17:54:08.941039 containerd[1492]: time="2025-03-17T17:54:08.940158978Z" level=info msg="StartContainer for \"9f6c91c4bd99b334557e6468d2e6ee563eb80569711ac488a5bb62177d20bd33\" returns successfully" Mar 17 17:54:08.956033 kubelet[2601]: E0317 17:54:08.955259 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:08.956667 containerd[1492]: time="2025-03-17T17:54:08.956572324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5thkq,Uid:7f692ac4-12a5-4247-b4c3-da73eae3ab35,Namespace:kube-system,Attempt:0,}" Mar 17 17:54:09.011549 containerd[1492]: time="2025-03-17T17:54:09.010528732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:54:09.011549 containerd[1492]: time="2025-03-17T17:54:09.010658734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:54:09.011549 containerd[1492]: time="2025-03-17T17:54:09.010684735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:09.012132 containerd[1492]: time="2025-03-17T17:54:09.010871390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:09.051255 systemd[1]: Started cri-containerd-a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab.scope - libcontainer container a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab. Mar 17 17:54:09.114261 kubelet[2601]: E0317 17:54:09.113806 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:09.133071 containerd[1492]: time="2025-03-17T17:54:09.133017367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5thkq,Uid:7f692ac4-12a5-4247-b4c3-da73eae3ab35,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab\"" Mar 17 17:54:09.136283 kubelet[2601]: E0317 17:54:09.136241 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:09.137734 kubelet[2601]: I0317 17:54:09.137535 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kdjhf" podStartSLOduration=1.137512483 podStartE2EDuration="1.137512483s" podCreationTimestamp="2025-03-17 17:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:54:09.13730705 +0000 UTC m=+6.533587958" watchObservedRunningTime="2025-03-17 17:54:09.137512483 +0000 UTC m=+6.533793391" Mar 17 17:54:10.667215 kubelet[2601]: E0317 17:54:10.667151 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:10.796411 kubelet[2601]: E0317 17:54:10.795357 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:10.935084 sshd[2673]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:54:11.120047 kubelet[2601]: E0317 17:54:11.118621 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:11.120047 kubelet[2601]: E0317 17:54:11.119025 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:11.256554 sshd-session[2971]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:54:11.880704 update_engine[1470]: I20250317 17:54:11.879673 1470 update_attempter.cc:509] Updating boot flags... Mar 17 17:54:11.937058 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2978) Mar 17 17:54:13.540923 sshd[2673]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:54:13.865672 sshd-session[2985]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:54:14.218262 systemd[1]: Started sshd@9-24.199.119.133:22-115.113.173.34:49682.service - OpenSSH per-connection server daemon (115.113.173.34:49682). Mar 17 17:54:14.529816 kubelet[2601]: E0317 17:54:14.529291 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:15.130738 kubelet[2601]: E0317 17:54:15.130690 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:15.341769 sshd[2987]: Invalid user deploy from 115.113.173.34 port 49682 Mar 17 17:54:15.863185 sshd[2987]: Connection closed by invalid user deploy 115.113.173.34 port 49682 [preauth] Mar 17 17:54:15.869616 systemd[1]: sshd@9-24.199.119.133:22-115.113.173.34:49682.service: Deactivated successfully. Mar 17 17:54:16.425816 sshd[2673]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:54:16.585975 sshd[2673]: Received disconnect from 218.92.0.188 port 44318:11: [preauth] Mar 17 17:54:16.585975 sshd[2673]: Disconnected from authenticating user root 218.92.0.188 port 44318 [preauth] Mar 17 17:54:16.590547 systemd[1]: sshd@8-24.199.119.133:22-218.92.0.188:44318.service: Deactivated successfully. Mar 17 17:54:18.098397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731326186.mount: Deactivated successfully. 
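
Interleaved with the Kubernetes startup, sshd is rejecting repeated root logins from 218.92.0.188 (three PAM denials, then a preauth disconnect) and an invalid-user probe from 115.113.173.34. A small fail2ban-style tally over journal text, sketched in Go under the assumption the log is piped in on stdin:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches sshd lines like:
	//   "PAM: Permission denied for root from 218.92.0.188"
	re := regexp.MustCompile(`PAM: Permission denied for (\S+) from (\S+)`)
	denials := map[string]int{} // source IP -> failed attempts

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			denials[m[2]]++
		}
	}
	for ip, n := range denials {
		fmt.Printf("%s: %d denied attempts\n", ip, n) // 218.92.0.188 shows 3 here
	}
}
```
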
Mar 17 17:54:21.077051 containerd[1492]: time="2025-03-17T17:54:21.076973184Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:21.078103 containerd[1492]: time="2025-03-17T17:54:21.043104966Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 17 17:54:21.079607 containerd[1492]: time="2025-03-17T17:54:21.079430686Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.294348156s" Mar 17 17:54:21.079607 containerd[1492]: time="2025-03-17T17:54:21.079492049Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 17:54:21.081623 containerd[1492]: time="2025-03-17T17:54:21.081006012Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:21.082543 containerd[1492]: time="2025-03-17T17:54:21.082497713Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:54:21.087565 containerd[1492]: time="2025-03-17T17:54:21.086988929Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:54:21.177207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787157859.mount: Deactivated successfully. Mar 17 17:54:21.184963 containerd[1492]: time="2025-03-17T17:54:21.184895004Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060\"" Mar 17 17:54:21.188239 containerd[1492]: time="2025-03-17T17:54:21.187826790Z" level=info msg="StartContainer for \"338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060\"" Mar 17 17:54:21.272370 systemd[1]: Started cri-containerd-338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060.scope - libcontainer container 338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060. Mar 17 17:54:21.294657 systemd[1]: run-containerd-runc-k8s.io-338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060-runc.tfpsEF.mount: Deactivated successfully. Mar 17 17:54:21.330888 containerd[1492]: time="2025-03-17T17:54:21.329601291Z" level=info msg="StartContainer for \"338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060\" returns successfully" Mar 17 17:54:21.344645 systemd[1]: cri-containerd-338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060.scope: Deactivated successfully. 
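
The pull above reports 166730503 bytes read in 12.294348156s for the cilium image. Back-of-envelope throughput from those two numbers, both copied verbatim from the log:

```go
package main

import "fmt"

func main() {
	const bytesRead = 166730503.0 // "bytes read=166730503" from the log
	const seconds = 12.294348156  // "in 12.294348156s" from the log
	// Prints roughly "~13.6 MB/s (12.9 MiB/s)".
	fmt.Printf("~%.1f MB/s (%.1f MiB/s)\n",
		bytesRead/seconds/1e6, bytesRead/seconds/(1<<20))
}
```
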
Mar 17 17:54:21.514545 containerd[1492]: time="2025-03-17T17:54:21.469044047Z" level=info msg="shim disconnected" id=338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060 namespace=k8s.io Mar 17 17:54:21.514545 containerd[1492]: time="2025-03-17T17:54:21.514288328Z" level=warning msg="cleaning up after shim disconnected" id=338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060 namespace=k8s.io Mar 17 17:54:21.514545 containerd[1492]: time="2025-03-17T17:54:21.514311133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:54:22.158358 kubelet[2601]: E0317 17:54:22.158277 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:22.163558 containerd[1492]: time="2025-03-17T17:54:22.163495140Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:54:22.177543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060-rootfs.mount: Deactivated successfully. Mar 17 17:54:22.207230 containerd[1492]: time="2025-03-17T17:54:22.207128793Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b\"" Mar 17 17:54:22.208220 containerd[1492]: time="2025-03-17T17:54:22.208162347Z" level=info msg="StartContainer for \"31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b\"" Mar 17 17:54:22.267752 systemd[1]: run-containerd-runc-k8s.io-31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b-runc.0XnOx0.mount: Deactivated successfully. Mar 17 17:54:22.280256 systemd[1]: Started cri-containerd-31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b.scope - libcontainer container 31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b. Mar 17 17:54:22.349920 containerd[1492]: time="2025-03-17T17:54:22.349275101Z" level=info msg="StartContainer for \"31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b\" returns successfully" Mar 17 17:54:22.375733 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:54:22.376358 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:54:22.376572 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:54:22.384702 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:54:22.388642 systemd[1]: cri-containerd-31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b.scope: Deactivated successfully. Mar 17 17:54:22.444890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
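
The apply-sysctl-overwrites init step, followed by systemd immediately re-running systemd-sysctl, reflects cilium adjusting kernel parameters under /proc/sys. A hedged sketch of writing one such parameter; the specific key and value (rp_filter=0, commonly disabled for cilium datapaths) are illustrative assumptions, not values read from this log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl writes a value under /proc/sys, e.g. the key
// "net.ipv4.conf.all.rp_filter" maps to /proc/sys/net/ipv4/conf/all/rp_filter.
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative only: requires root, and the exact keys cilium touches
	// are not visible in this log.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
	}
}
```
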
Mar 17 17:54:22.471469 containerd[1492]: time="2025-03-17T17:54:22.471393059Z" level=info msg="shim disconnected" id=31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b namespace=k8s.io Mar 17 17:54:22.471469 containerd[1492]: time="2025-03-17T17:54:22.471459840Z" level=warning msg="cleaning up after shim disconnected" id=31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b namespace=k8s.io Mar 17 17:54:22.471469 containerd[1492]: time="2025-03-17T17:54:22.471470450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:54:22.496275 containerd[1492]: time="2025-03-17T17:54:22.496210886Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:54:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:54:23.164928 kubelet[2601]: E0317 17:54:23.163233 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:23.174993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b-rootfs.mount: Deactivated successfully. Mar 17 17:54:23.191150 containerd[1492]: time="2025-03-17T17:54:23.190788199Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:54:23.307446 containerd[1492]: time="2025-03-17T17:54:23.305808674Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1\"" Mar 17 17:54:23.321544 containerd[1492]: time="2025-03-17T17:54:23.321470618Z" level=info msg="StartContainer for \"8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1\"" Mar 17 17:54:23.426136 systemd[1]: Started cri-containerd-8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1.scope - libcontainer container 8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1. Mar 17 17:54:23.513355 systemd[1]: cri-containerd-8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1.scope: Deactivated successfully. 
Mar 17 17:54:23.515877 containerd[1492]: time="2025-03-17T17:54:23.515210848Z" level=info msg="StartContainer for \"8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1\" returns successfully" Mar 17 17:54:23.576393 containerd[1492]: time="2025-03-17T17:54:23.576331566Z" level=info msg="shim disconnected" id=8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1 namespace=k8s.io Mar 17 17:54:23.576889 containerd[1492]: time="2025-03-17T17:54:23.576669516Z" level=warning msg="cleaning up after shim disconnected" id=8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1 namespace=k8s.io Mar 17 17:54:23.576889 containerd[1492]: time="2025-03-17T17:54:23.576693902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:54:24.065585 containerd[1492]: time="2025-03-17T17:54:24.065503400Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:24.066888 containerd[1492]: time="2025-03-17T17:54:24.066497574Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 17:54:24.088882 containerd[1492]: time="2025-03-17T17:54:24.088750666Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:54:24.092963 containerd[1492]: time="2025-03-17T17:54:24.092888600Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.010107261s" Mar 17 17:54:24.093511 containerd[1492]: time="2025-03-17T17:54:24.093269931Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 17:54:24.098461 containerd[1492]: time="2025-03-17T17:54:24.098183201Z" level=info msg="CreateContainer within sandbox \"a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:54:24.129724 containerd[1492]: time="2025-03-17T17:54:24.129605196Z" level=info msg="CreateContainer within sandbox \"a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\"" Mar 17 17:54:24.131913 containerd[1492]: time="2025-03-17T17:54:24.131383043Z" level=info msg="StartContainer for \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\"" Mar 17 17:54:24.172891 kubelet[2601]: E0317 17:54:24.171913 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:24.177136 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1-rootfs.mount: Deactivated successfully. Mar 17 17:54:24.193070 containerd[1492]: time="2025-03-17T17:54:24.193027470Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:54:24.228866 containerd[1492]: time="2025-03-17T17:54:24.225052471Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895\"" Mar 17 17:54:24.228866 containerd[1492]: time="2025-03-17T17:54:24.226368695Z" level=info msg="StartContainer for \"de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895\"" Mar 17 17:54:24.239906 systemd[1]: Started cri-containerd-dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4.scope - libcontainer container dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4. Mar 17 17:54:24.306138 systemd[1]: Started cri-containerd-de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895.scope - libcontainer container de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895. Mar 17 17:54:24.310912 containerd[1492]: time="2025-03-17T17:54:24.310697935Z" level=info msg="StartContainer for \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\" returns successfully" Mar 17 17:54:24.356088 systemd[1]: cri-containerd-de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895.scope: Deactivated successfully. Mar 17 17:54:24.359235 containerd[1492]: time="2025-03-17T17:54:24.359103358Z" level=info msg="StartContainer for \"de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895\" returns successfully" Mar 17 17:54:24.424780 containerd[1492]: time="2025-03-17T17:54:24.424196788Z" level=info msg="shim disconnected" id=de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895 namespace=k8s.io Mar 17 17:54:24.424780 containerd[1492]: time="2025-03-17T17:54:24.424274500Z" level=warning msg="cleaning up after shim disconnected" id=de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895 namespace=k8s.io Mar 17 17:54:24.424780 containerd[1492]: time="2025-03-17T17:54:24.424285250Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:54:24.456665 containerd[1492]: time="2025-03-17T17:54:24.454828656Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:54:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:54:25.177625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895-rootfs.mount: Deactivated successfully. 
Mar 17 17:54:25.202734 kubelet[2601]: E0317 17:54:25.202677 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:25.209000 kubelet[2601]: E0317 17:54:25.208926 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:25.211981 containerd[1492]: time="2025-03-17T17:54:25.211803748Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:54:25.240174 containerd[1492]: time="2025-03-17T17:54:25.239773878Z" level=info msg="CreateContainer within sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4\"" Mar 17 17:54:25.241054 containerd[1492]: time="2025-03-17T17:54:25.240949603Z" level=info msg="StartContainer for \"8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4\"" Mar 17 17:54:25.315581 systemd[1]: Started cri-containerd-8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4.scope - libcontainer container 8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4. Mar 17 17:54:25.444037 containerd[1492]: time="2025-03-17T17:54:25.442808881Z" level=info msg="StartContainer for \"8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4\" returns successfully" Mar 17 17:54:25.455064 kubelet[2601]: I0317 17:54:25.453861 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5thkq" podStartSLOduration=2.501453969 podStartE2EDuration="17.453827963s" podCreationTimestamp="2025-03-17 17:54:08 +0000 UTC" firstStartedPulling="2025-03-17 17:54:09.142547948 +0000 UTC m=+6.538828853" lastFinishedPulling="2025-03-17 17:54:24.09492196 +0000 UTC m=+21.491202847" observedRunningTime="2025-03-17 17:54:25.318349046 +0000 UTC m=+22.714629954" watchObservedRunningTime="2025-03-17 17:54:25.453827963 +0000 UTC m=+22.850108865" Mar 17 17:54:25.905018 kubelet[2601]: I0317 17:54:25.904957 2601 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 17:54:26.048197 systemd[1]: Created slice kubepods-burstable-pod53f85659_f3c3_434f_8bce_1c56e1b91361.slice - libcontainer container kubepods-burstable-pod53f85659_f3c3_434f_8bce_1c56e1b91361.slice. Mar 17 17:54:26.067439 systemd[1]: Created slice kubepods-burstable-pod9d28f59d_10c3_4752_bcf3_ffc3593df58e.slice - libcontainer container kubepods-burstable-pod9d28f59d_10c3_4752_bcf3_ffc3593df58e.slice. 
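
The startup-latency entries fit together arithmetically: podStartSLOduration is the end-to-end duration minus the image-pull window. For cilium-operator, 17.453827963s minus (lastFinishedPulling − firstStartedPulling) reproduces the logged 2.501453969s to within nanosecond rounding. Checked in Go with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values copied from the cilium-operator pod_startup_latency_tracker entry.
	e2e := 17453827963 * time.Nanosecond // podStartE2EDuration="17.453827963s"
	firstPull := parse("2025-03-17 17:54:09.142547948 +0000 UTC")
	lastPull := parse("2025-03-17 17:54:24.09492196 +0000 UTC")
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartSLOduration ~", slo) // logged value: 2.501453969s
}
```
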
Mar 17 17:54:26.146414 kubelet[2601]: I0317 17:54:26.146045 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4f2\" (UniqueName: \"kubernetes.io/projected/9d28f59d-10c3-4752-bcf3-ffc3593df58e-kube-api-access-sx4f2\") pod \"coredns-6f6b679f8f-kz8fq\" (UID: \"9d28f59d-10c3-4752-bcf3-ffc3593df58e\") " pod="kube-system/coredns-6f6b679f8f-kz8fq" Mar 17 17:54:26.146414 kubelet[2601]: I0317 17:54:26.146115 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53f85659-f3c3-434f-8bce-1c56e1b91361-config-volume\") pod \"coredns-6f6b679f8f-x8jxz\" (UID: \"53f85659-f3c3-434f-8bce-1c56e1b91361\") " pod="kube-system/coredns-6f6b679f8f-x8jxz" Mar 17 17:54:26.146414 kubelet[2601]: I0317 17:54:26.146284 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d28f59d-10c3-4752-bcf3-ffc3593df58e-config-volume\") pod \"coredns-6f6b679f8f-kz8fq\" (UID: \"9d28f59d-10c3-4752-bcf3-ffc3593df58e\") " pod="kube-system/coredns-6f6b679f8f-kz8fq" Mar 17 17:54:26.146414 kubelet[2601]: I0317 17:54:26.146332 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns68k\" (UniqueName: \"kubernetes.io/projected/53f85659-f3c3-434f-8bce-1c56e1b91361-kube-api-access-ns68k\") pod \"coredns-6f6b679f8f-x8jxz\" (UID: \"53f85659-f3c3-434f-8bce-1c56e1b91361\") " pod="kube-system/coredns-6f6b679f8f-x8jxz" Mar 17 17:54:26.234433 kubelet[2601]: E0317 17:54:26.233406 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:26.237877 kubelet[2601]: E0317 17:54:26.237532 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:26.318938 kubelet[2601]: I0317 17:54:26.316314 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hvclt" podStartSLOduration=6.016700538 podStartE2EDuration="18.316178071s" podCreationTimestamp="2025-03-17 17:54:08 +0000 UTC" firstStartedPulling="2025-03-17 17:54:08.781965489 +0000 UTC m=+6.178246382" lastFinishedPulling="2025-03-17 17:54:21.081443018 +0000 UTC m=+18.477723915" observedRunningTime="2025-03-17 17:54:26.297645493 +0000 UTC m=+23.693926469" watchObservedRunningTime="2025-03-17 17:54:26.316178071 +0000 UTC m=+23.712458984" Mar 17 17:54:26.356472 kubelet[2601]: E0317 17:54:26.356411 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:26.358871 containerd[1492]: time="2025-03-17T17:54:26.358468179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x8jxz,Uid:53f85659-f3c3-434f-8bce-1c56e1b91361,Namespace:kube-system,Attempt:0,}" Mar 17 17:54:26.387891 kubelet[2601]: E0317 17:54:26.381964 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:26.388179 containerd[1492]: time="2025-03-17T17:54:26.385791172Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kz8fq,Uid:9d28f59d-10c3-4752-bcf3-ffc3593df58e,Namespace:kube-system,Attempt:0,}" Mar 17 17:54:27.235414 kubelet[2601]: E0317 17:54:27.235364 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:28.238443 kubelet[2601]: E0317 17:54:28.238378 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:28.659661 systemd-networkd[1393]: cilium_host: Link UP Mar 17 17:54:28.664080 systemd-networkd[1393]: cilium_net: Link UP Mar 17 17:54:28.664672 systemd-networkd[1393]: cilium_net: Gained carrier Mar 17 17:54:28.665058 systemd-networkd[1393]: cilium_host: Gained carrier Mar 17 17:54:28.745559 systemd-networkd[1393]: cilium_host: Gained IPv6LL Mar 17 17:54:28.847082 systemd-networkd[1393]: cilium_net: Gained IPv6LL Mar 17 17:54:28.884097 systemd-networkd[1393]: cilium_vxlan: Link UP Mar 17 17:54:28.884110 systemd-networkd[1393]: cilium_vxlan: Gained carrier Mar 17 17:54:29.436222 kernel: NET: Registered PF_ALG protocol family Mar 17 17:54:30.599128 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL Mar 17 17:54:30.756579 systemd-networkd[1393]: lxc_health: Link UP Mar 17 17:54:30.763036 systemd-networkd[1393]: lxc_health: Gained carrier Mar 17 17:54:31.032079 kernel: eth0: renamed from tmp89fdf Mar 17 17:54:31.034326 systemd-networkd[1393]: lxc7e749016fcc8: Link UP Mar 17 17:54:31.038978 systemd-networkd[1393]: lxc7e749016fcc8: Gained carrier Mar 17 17:54:31.093926 kernel: eth0: renamed from tmp54dc8 Mar 17 17:54:31.098864 systemd-networkd[1393]: lxc1d691cc8c07b: Link UP Mar 17 17:54:31.103536 systemd-networkd[1393]: lxc1d691cc8c07b: Gained carrier Mar 17 17:54:32.007034 systemd-networkd[1393]: lxc_health: Gained IPv6LL Mar 17 17:54:32.263163 systemd-networkd[1393]: lxc7e749016fcc8: Gained IPv6LL Mar 17 17:54:32.512764 kubelet[2601]: E0317 17:54:32.510930 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:32.520353 systemd-networkd[1393]: lxc1d691cc8c07b: Gained IPv6LL Mar 17 17:54:33.253485 kubelet[2601]: E0317 17:54:33.253113 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:34.256676 kubelet[2601]: E0317 17:54:34.256398 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:36.712202 containerd[1492]: time="2025-03-17T17:54:36.711990051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:54:36.714956 containerd[1492]: time="2025-03-17T17:54:36.712160397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:54:36.714956 containerd[1492]: time="2025-03-17T17:54:36.712183236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:36.714956 containerd[1492]: time="2025-03-17T17:54:36.712404988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:36.744467 containerd[1492]: time="2025-03-17T17:54:36.743687194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:54:36.744467 containerd[1492]: time="2025-03-17T17:54:36.743964947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:54:36.744467 containerd[1492]: time="2025-03-17T17:54:36.743996440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:36.744467 containerd[1492]: time="2025-03-17T17:54:36.744214439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:54:36.797197 systemd[1]: Started cri-containerd-54dc8f2dd17ed7a9033cc8fd456c2159152c12215d0cee345ab0062a8f2e6f99.scope - libcontainer container 54dc8f2dd17ed7a9033cc8fd456c2159152c12215d0cee345ab0062a8f2e6f99. Mar 17 17:54:36.829641 systemd[1]: Started cri-containerd-89fdfa39546d296502dabb3110c917e8fff63719c9f15e110b86164a761c6bd0.scope - libcontainer container 89fdfa39546d296502dabb3110c917e8fff63719c9f15e110b86164a761c6bd0. Mar 17 17:54:36.903659 containerd[1492]: time="2025-03-17T17:54:36.902752972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kz8fq,Uid:9d28f59d-10c3-4752-bcf3-ffc3593df58e,Namespace:kube-system,Attempt:0,} returns sandbox id \"54dc8f2dd17ed7a9033cc8fd456c2159152c12215d0cee345ab0062a8f2e6f99\"" Mar 17 17:54:36.906371 kubelet[2601]: E0317 17:54:36.906339 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:36.911028 containerd[1492]: time="2025-03-17T17:54:36.910773695Z" level=info msg="CreateContainer within sandbox \"54dc8f2dd17ed7a9033cc8fd456c2159152c12215d0cee345ab0062a8f2e6f99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:54:36.939735 containerd[1492]: time="2025-03-17T17:54:36.939667339Z" level=info msg="CreateContainer within sandbox \"54dc8f2dd17ed7a9033cc8fd456c2159152c12215d0cee345ab0062a8f2e6f99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e6f1e76d97bcfabb9d33627c531c9e6d36a6c07cdc14c365a3fd113a1b7fb49\"" Mar 17 17:54:36.941412 containerd[1492]: time="2025-03-17T17:54:36.941364951Z" level=info msg="StartContainer for \"5e6f1e76d97bcfabb9d33627c531c9e6d36a6c07cdc14c365a3fd113a1b7fb49\"" Mar 17 17:54:36.964638 containerd[1492]: time="2025-03-17T17:54:36.963665035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x8jxz,Uid:53f85659-f3c3-434f-8bce-1c56e1b91361,Namespace:kube-system,Attempt:0,} returns sandbox id \"89fdfa39546d296502dabb3110c917e8fff63719c9f15e110b86164a761c6bd0\"" Mar 17 17:54:36.967221 kubelet[2601]: E0317 17:54:36.964677 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:36.970002 containerd[1492]: time="2025-03-17T17:54:36.969947199Z" level=info 
msg="CreateContainer within sandbox \"89fdfa39546d296502dabb3110c917e8fff63719c9f15e110b86164a761c6bd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:54:36.992183 containerd[1492]: time="2025-03-17T17:54:36.992017884Z" level=info msg="CreateContainer within sandbox \"89fdfa39546d296502dabb3110c917e8fff63719c9f15e110b86164a761c6bd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2dc4a722af698991cc670f9bc2090c646bc536e1cebbffe7f657982a33e1beb4\"" Mar 17 17:54:36.993894 containerd[1492]: time="2025-03-17T17:54:36.993130332Z" level=info msg="StartContainer for \"2dc4a722af698991cc670f9bc2090c646bc536e1cebbffe7f657982a33e1beb4\"" Mar 17 17:54:37.017512 systemd[1]: Started cri-containerd-5e6f1e76d97bcfabb9d33627c531c9e6d36a6c07cdc14c365a3fd113a1b7fb49.scope - libcontainer container 5e6f1e76d97bcfabb9d33627c531c9e6d36a6c07cdc14c365a3fd113a1b7fb49. Mar 17 17:54:37.073357 systemd[1]: Started cri-containerd-2dc4a722af698991cc670f9bc2090c646bc536e1cebbffe7f657982a33e1beb4.scope - libcontainer container 2dc4a722af698991cc670f9bc2090c646bc536e1cebbffe7f657982a33e1beb4. Mar 17 17:54:37.093658 containerd[1492]: time="2025-03-17T17:54:37.093592722Z" level=info msg="StartContainer for \"5e6f1e76d97bcfabb9d33627c531c9e6d36a6c07cdc14c365a3fd113a1b7fb49\" returns successfully" Mar 17 17:54:37.139756 containerd[1492]: time="2025-03-17T17:54:37.139666864Z" level=info msg="StartContainer for \"2dc4a722af698991cc670f9bc2090c646bc536e1cebbffe7f657982a33e1beb4\" returns successfully" Mar 17 17:54:37.266574 kubelet[2601]: E0317 17:54:37.265752 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:37.271373 kubelet[2601]: E0317 17:54:37.271336 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:37.346111 kubelet[2601]: I0317 17:54:37.346029 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-x8jxz" podStartSLOduration=29.345999405 podStartE2EDuration="29.345999405s" podCreationTimestamp="2025-03-17 17:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:54:37.308691358 +0000 UTC m=+34.704972295" watchObservedRunningTime="2025-03-17 17:54:37.345999405 +0000 UTC m=+34.742280313" Mar 17 17:54:37.346408 kubelet[2601]: I0317 17:54:37.346179 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kz8fq" podStartSLOduration=29.34615966 podStartE2EDuration="29.34615966s" podCreationTimestamp="2025-03-17 17:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:54:37.343612657 +0000 UTC m=+34.739893569" watchObservedRunningTime="2025-03-17 17:54:37.34615966 +0000 UTC m=+34.742440571" Mar 17 17:54:38.273711 kubelet[2601]: E0317 17:54:38.273654 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:38.275051 kubelet[2601]: E0317 17:54:38.275025 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:39.276609 kubelet[2601]: E0317 17:54:39.276547 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:39.276609 kubelet[2601]: E0317 17:54:39.276551 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:54:54.002330 systemd[1]: Started sshd@10-24.199.119.133:22-115.113.173.34:53676.service - OpenSSH per-connection server daemon (115.113.173.34:53676). Mar 17 17:54:55.088444 sshd[4010]: Invalid user dev from 115.113.173.34 port 53676 Mar 17 17:54:55.351020 sshd[4010]: Connection closed by invalid user dev 115.113.173.34 port 53676 [preauth] Mar 17 17:54:55.354041 systemd[1]: sshd@10-24.199.119.133:22-115.113.173.34:53676.service: Deactivated successfully. Mar 17 17:54:59.848421 systemd[1]: Started sshd@11-24.199.119.133:22-139.178.68.195:32930.service - OpenSSH per-connection server daemon (139.178.68.195:32930). Mar 17 17:54:59.941760 systemd[1]: Started sshd@12-24.199.119.133:22-218.92.0.188:61879.service - OpenSSH per-connection server daemon (218.92.0.188:61879). Mar 17 17:54:59.956056 sshd[4015]: Accepted publickey for core from 139.178.68.195 port 32930 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:54:59.958925 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:54:59.972919 systemd-logind[1469]: New session 8 of user core. Mar 17 17:54:59.978644 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:55:00.861466 sshd[4020]: Connection closed by 139.178.68.195 port 32930 Mar 17 17:55:00.860252 sshd-session[4015]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:00.871432 systemd[1]: sshd@11-24.199.119.133:22-139.178.68.195:32930.service: Deactivated successfully. Mar 17 17:55:00.877219 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:55:00.879501 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:55:00.882201 systemd-logind[1469]: Removed session 8. Mar 17 17:55:01.235521 sshd-session[4032]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:55:03.057189 sshd[4018]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:55:03.421726 sshd-session[4035]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:55:05.180453 sshd[4018]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:55:05.896620 systemd[1]: Started sshd@13-24.199.119.133:22-139.178.68.195:43392.service - OpenSSH per-connection server daemon (139.178.68.195:43392). Mar 17 17:55:06.021059 sshd[4037]: Accepted publickey for core from 139.178.68.195 port 43392 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:06.024047 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:06.046204 systemd-logind[1469]: New session 9 of user core. Mar 17 17:55:06.068321 systemd[1]: Started session-9.scope - Session 9 of User core. 
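
The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp (17:54:08) and observedRunningTime, with both pull timestamps zero-valued because no image pull was recorded. A quick check of that arithmetic — a minimal sketch, not kubelet code:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the coredns-6f6b679f8f-x8jxz entry above.
	created, err := time.Parse(time.RFC3339, "2025-03-17T17:54:08Z")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(time.RFC3339Nano, "2025-03-17T17:54:37.345999405Z")
	if err != nil {
		panic(err)
	}
	// observedRunningTime - podCreationTimestamp = podStartSLOduration
	fmt.Println(running.Sub(created)) // 29.345999405s, matching the log
}
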
Mar 17 17:55:06.289039 sshd-session[4040]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:55:06.346155 sshd[4039]: Connection closed by 139.178.68.195 port 43392 Mar 17 17:55:06.349927 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:06.371156 systemd[1]: sshd@13-24.199.119.133:22-139.178.68.195:43392.service: Deactivated successfully. Mar 17 17:55:06.377110 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:55:06.388690 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:55:06.398675 systemd-logind[1469]: Removed session 9. Mar 17 17:55:08.458478 sshd[4018]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:55:08.617332 sshd[4018]: Received disconnect from 218.92.0.188 port 61879:11: [preauth] Mar 17 17:55:08.617332 sshd[4018]: Disconnected from authenticating user root 218.92.0.188 port 61879 [preauth] Mar 17 17:55:08.621382 systemd[1]: sshd@12-24.199.119.133:22-218.92.0.188:61879.service: Deactivated successfully. Mar 17 17:55:11.371413 systemd[1]: Started sshd@14-24.199.119.133:22-139.178.68.195:43400.service - OpenSSH per-connection server daemon (139.178.68.195:43400). Mar 17 17:55:11.502283 sshd[4058]: Accepted publickey for core from 139.178.68.195 port 43400 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:11.506136 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:11.519937 systemd-logind[1469]: New session 10 of user core. Mar 17 17:55:11.528926 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:55:11.742394 sshd[4060]: Connection closed by 139.178.68.195 port 43400 Mar 17 17:55:11.745123 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:11.753153 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:55:11.754290 systemd[1]: sshd@14-24.199.119.133:22-139.178.68.195:43400.service: Deactivated successfully. Mar 17 17:55:11.758115 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:55:11.761829 systemd-logind[1469]: Removed session 10. Mar 17 17:55:16.772516 systemd[1]: Started sshd@15-24.199.119.133:22-139.178.68.195:39900.service - OpenSSH per-connection server daemon (139.178.68.195:39900). Mar 17 17:55:16.844531 sshd[4073]: Accepted publickey for core from 139.178.68.195 port 39900 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:16.847771 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:16.856558 systemd-logind[1469]: New session 11 of user core. Mar 17 17:55:16.870311 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:55:17.039103 sshd[4075]: Connection closed by 139.178.68.195 port 39900 Mar 17 17:55:17.040742 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:17.062791 systemd[1]: sshd@15-24.199.119.133:22-139.178.68.195:39900.service: Deactivated successfully. Mar 17 17:55:17.067579 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:55:17.070029 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:55:17.078570 systemd[1]: Started sshd@16-24.199.119.133:22-139.178.68.195:39910.service - OpenSSH per-connection server daemon (139.178.68.195:39910). Mar 17 17:55:17.079692 systemd-logind[1469]: Removed session 11. 
Mar 17 17:55:17.161692 sshd[4087]: Accepted publickey for core from 139.178.68.195 port 39910 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:17.164708 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:17.178293 systemd-logind[1469]: New session 12 of user core. Mar 17 17:55:17.186246 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:55:17.445042 sshd[4090]: Connection closed by 139.178.68.195 port 39910 Mar 17 17:55:17.446189 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:17.471267 systemd[1]: Started sshd@17-24.199.119.133:22-139.178.68.195:39912.service - OpenSSH per-connection server daemon (139.178.68.195:39912). Mar 17 17:55:17.474413 systemd[1]: sshd@16-24.199.119.133:22-139.178.68.195:39910.service: Deactivated successfully. Mar 17 17:55:17.480763 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:55:17.487635 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:55:17.498643 systemd-logind[1469]: Removed session 12. Mar 17 17:55:17.591908 sshd[4097]: Accepted publickey for core from 139.178.68.195 port 39912 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:17.595748 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:17.610148 systemd-logind[1469]: New session 13 of user core. Mar 17 17:55:17.616482 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:55:17.848373 sshd[4102]: Connection closed by 139.178.68.195 port 39912 Mar 17 17:55:17.850264 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:17.859715 systemd[1]: sshd@17-24.199.119.133:22-139.178.68.195:39912.service: Deactivated successfully. Mar 17 17:55:17.864805 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:55:17.866589 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:55:17.868588 systemd-logind[1469]: Removed session 13. Mar 17 17:55:19.975926 kubelet[2601]: E0317 17:55:19.974689 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:22.887437 systemd[1]: Started sshd@18-24.199.119.133:22-139.178.68.195:39924.service - OpenSSH per-connection server daemon (139.178.68.195:39924). Mar 17 17:55:22.962143 sshd[4115]: Accepted publickey for core from 139.178.68.195 port 39924 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:22.963328 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:22.971666 systemd-logind[1469]: New session 14 of user core. Mar 17 17:55:22.978299 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:55:23.166882 sshd[4117]: Connection closed by 139.178.68.195 port 39924 Mar 17 17:55:23.165120 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:23.170916 systemd[1]: sshd@18-24.199.119.133:22-139.178.68.195:39924.service: Deactivated successfully. Mar 17 17:55:23.175077 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:55:23.178386 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:55:23.179737 systemd-logind[1469]: Removed session 14. 
Mar 17 17:55:25.974541 kubelet[2601]: E0317 17:55:25.974398 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:28.183251 systemd[1]: Started sshd@19-24.199.119.133:22-139.178.68.195:58616.service - OpenSSH per-connection server daemon (139.178.68.195:58616). Mar 17 17:55:28.254921 sshd[4130]: Accepted publickey for core from 139.178.68.195 port 58616 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:28.256735 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:28.265013 systemd-logind[1469]: New session 15 of user core. Mar 17 17:55:28.269204 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:55:28.430021 sshd[4132]: Connection closed by 139.178.68.195 port 58616 Mar 17 17:55:28.431152 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:28.437462 systemd[1]: sshd@19-24.199.119.133:22-139.178.68.195:58616.service: Deactivated successfully. Mar 17 17:55:28.441804 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:55:28.443712 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:55:28.445724 systemd-logind[1469]: Removed session 15. Mar 17 17:55:33.453373 systemd[1]: Started sshd@20-24.199.119.133:22-139.178.68.195:58620.service - OpenSSH per-connection server daemon (139.178.68.195:58620). Mar 17 17:55:33.513237 sshd[4144]: Accepted publickey for core from 139.178.68.195 port 58620 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:33.514391 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:33.522819 systemd-logind[1469]: New session 16 of user core. Mar 17 17:55:33.531251 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:55:33.723094 sshd[4146]: Connection closed by 139.178.68.195 port 58620 Mar 17 17:55:33.724350 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:33.732334 systemd[1]: sshd@20-24.199.119.133:22-139.178.68.195:58620.service: Deactivated successfully. Mar 17 17:55:33.735996 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:55:33.738270 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:55:33.740786 systemd-logind[1469]: Removed session 16. Mar 17 17:55:34.975783 kubelet[2601]: E0317 17:55:34.974673 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:35.936685 systemd[1]: Started sshd@21-24.199.119.133:22-115.113.173.34:60282.service - OpenSSH per-connection server daemon (115.113.173.34:60282). Mar 17 17:55:36.979617 kubelet[2601]: E0317 17:55:36.977830 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:36.999911 sshd[4158]: Invalid user oscar from 115.113.173.34 port 60282 Mar 17 17:55:37.265033 sshd[4158]: Connection closed by invalid user oscar 115.113.173.34 port 60282 [preauth] Mar 17 17:55:37.269146 systemd[1]: sshd@21-24.199.119.133:22-115.113.173.34:60282.service: Deactivated successfully. 
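
The sshd@10 and sshd@21 services above record failed probes ("Invalid user dev", "Invalid user oscar") from 115.113.173.34, alongside the repeated root attempts from 218.92.0.188. A hypothetical log-scanning sketch (not part of sshd or systemd) that tallies such probes per source address:

package main

import (
	"fmt"
	"regexp"
)

// Matches the "Invalid user NAME from IP port PORT" lines sshd emits
// for preauth failures like the ones in this journal.
var invalidUser = regexp.MustCompile(`Invalid user (\S+) from (\S+) port (\d+)`)

func main() {
	lines := []string{
		"sshd[4010]: Invalid user dev from 115.113.173.34 port 53676",
		"sshd[4158]: Invalid user oscar from 115.113.173.34 port 60282",
	}
	hits := map[string]int{}
	for _, l := range lines {
		if m := invalidUser.FindStringSubmatch(l); m != nil {
			hits[m[2]]++ // key by source IP
		}
	}
	fmt.Println(hits) // map[115.113.173.34:2]
}
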
Mar 17 17:55:38.748533 systemd[1]: Started sshd@22-24.199.119.133:22-139.178.68.195:52196.service - OpenSSH per-connection server daemon (139.178.68.195:52196). Mar 17 17:55:38.813745 sshd[4166]: Accepted publickey for core from 139.178.68.195 port 52196 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:38.815648 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:38.824692 systemd-logind[1469]: New session 17 of user core. Mar 17 17:55:38.831181 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:55:38.996316 sshd[4168]: Connection closed by 139.178.68.195 port 52196 Mar 17 17:55:38.997131 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:39.011371 systemd[1]: sshd@22-24.199.119.133:22-139.178.68.195:52196.service: Deactivated successfully. Mar 17 17:55:39.016197 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:55:39.021470 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:55:39.026344 systemd[1]: Started sshd@23-24.199.119.133:22-139.178.68.195:52202.service - OpenSSH per-connection server daemon (139.178.68.195:52202). Mar 17 17:55:39.028723 systemd-logind[1469]: Removed session 17. Mar 17 17:55:39.104729 sshd[4179]: Accepted publickey for core from 139.178.68.195 port 52202 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:39.107975 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:39.116912 systemd-logind[1469]: New session 18 of user core. Mar 17 17:55:39.124431 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:55:39.504982 sshd[4182]: Connection closed by 139.178.68.195 port 52202 Mar 17 17:55:39.504462 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:39.520614 systemd[1]: sshd@23-24.199.119.133:22-139.178.68.195:52202.service: Deactivated successfully. Mar 17 17:55:39.525102 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:55:39.528883 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:55:39.537470 systemd[1]: Started sshd@24-24.199.119.133:22-139.178.68.195:52210.service - OpenSSH per-connection server daemon (139.178.68.195:52210). Mar 17 17:55:39.540426 systemd-logind[1469]: Removed session 18. Mar 17 17:55:39.660718 sshd[4193]: Accepted publickey for core from 139.178.68.195 port 52210 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:39.664540 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:39.674577 systemd-logind[1469]: New session 19 of user core. Mar 17 17:55:39.689207 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:55:42.143586 sshd[4196]: Connection closed by 139.178.68.195 port 52210 Mar 17 17:55:42.145150 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:42.172846 systemd[1]: sshd@24-24.199.119.133:22-139.178.68.195:52210.service: Deactivated successfully. Mar 17 17:55:42.181791 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:55:42.185957 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:55:42.204442 systemd[1]: Started sshd@25-24.199.119.133:22-139.178.68.195:52216.service - OpenSSH per-connection server daemon (139.178.68.195:52216). 
Mar 17 17:55:42.209282 systemd-logind[1469]: Removed session 19. Mar 17 17:55:42.280794 sshd[4211]: Accepted publickey for core from 139.178.68.195 port 52216 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:42.283465 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:42.295637 systemd-logind[1469]: New session 20 of user core. Mar 17 17:55:42.302197 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:55:42.742484 sshd[4216]: Connection closed by 139.178.68.195 port 52216 Mar 17 17:55:42.743991 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:42.761286 systemd[1]: sshd@25-24.199.119.133:22-139.178.68.195:52216.service: Deactivated successfully. Mar 17 17:55:42.768058 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:55:42.770340 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:55:42.778424 systemd[1]: Started sshd@26-24.199.119.133:22-139.178.68.195:52222.service - OpenSSH per-connection server daemon (139.178.68.195:52222). Mar 17 17:55:42.782172 systemd-logind[1469]: Removed session 20. Mar 17 17:55:42.862173 sshd[4226]: Accepted publickey for core from 139.178.68.195 port 52222 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:42.863976 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:42.873049 systemd-logind[1469]: New session 21 of user core. Mar 17 17:55:42.882294 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:55:43.085747 sshd[4229]: Connection closed by 139.178.68.195 port 52222 Mar 17 17:55:43.085151 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:43.090369 systemd[1]: sshd@26-24.199.119.133:22-139.178.68.195:52222.service: Deactivated successfully. Mar 17 17:55:43.094235 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:55:43.097734 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:55:43.099672 systemd-logind[1469]: Removed session 21. Mar 17 17:55:46.974951 kubelet[2601]: E0317 17:55:46.974584 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:46.977031 kubelet[2601]: E0317 17:55:46.976872 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:48.109445 systemd[1]: Started sshd@27-24.199.119.133:22-139.178.68.195:35924.service - OpenSSH per-connection server daemon (139.178.68.195:35924). Mar 17 17:55:48.193107 sshd[4241]: Accepted publickey for core from 139.178.68.195 port 35924 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:48.195353 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:48.205025 systemd-logind[1469]: New session 22 of user core. Mar 17 17:55:48.213206 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 17 17:55:48.383864 sshd[4243]: Connection closed by 139.178.68.195 port 35924 Mar 17 17:55:48.384598 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:48.391263 systemd[1]: sshd@27-24.199.119.133:22-139.178.68.195:35924.service: Deactivated successfully. Mar 17 17:55:48.397022 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:55:48.400824 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:55:48.404002 systemd-logind[1469]: Removed session 22. Mar 17 17:55:49.975152 kubelet[2601]: E0317 17:55:49.974999 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:50.976497 kubelet[2601]: E0317 17:55:50.975557 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 17:55:53.206348 systemd[1]: Started sshd@28-24.199.119.133:22-218.92.0.188:46325.service - OpenSSH per-connection server daemon (218.92.0.188:46325). Mar 17 17:55:53.416611 systemd[1]: Started sshd@29-24.199.119.133:22-139.178.68.195:35932.service - OpenSSH per-connection server daemon (139.178.68.195:35932). Mar 17 17:55:53.480719 sshd[4261]: Accepted publickey for core from 139.178.68.195 port 35932 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:53.483191 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:53.492245 systemd-logind[1469]: New session 23 of user core. Mar 17 17:55:53.502216 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:55:53.683230 sshd[4263]: Connection closed by 139.178.68.195 port 35932 Mar 17 17:55:53.686816 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:53.695790 systemd[1]: sshd@29-24.199.119.133:22-139.178.68.195:35932.service: Deactivated successfully. Mar 17 17:55:53.701072 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:55:53.703561 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:55:53.705488 systemd-logind[1469]: Removed session 23. Mar 17 17:55:55.720541 sshd-session[4274]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:55:57.948581 sshd[4258]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:55:58.306680 sshd-session[4276]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:55:58.736697 systemd[1]: Started sshd@30-24.199.119.133:22-139.178.68.195:45134.service - OpenSSH per-connection server daemon (139.178.68.195:45134). Mar 17 17:55:58.805899 sshd[4278]: Accepted publickey for core from 139.178.68.195 port 45134 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:55:58.808538 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:58.824950 systemd-logind[1469]: New session 24 of user core. Mar 17 17:55:58.830327 systemd[1]: Started session-24.scope - Session 24 of User core. 
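
The kubelet dns.go:153 events repeated throughout this log arise because the node's resolver configuration expands to more entries than the three nameservers the glibc resolver honors, so kubelet trims the list and warns; note the applied line even carries 67.207.67.3 twice. A minimal sketch of that cap, with hypothetical names rather than kubelet's actual implementation:

package main

import "fmt"

const maxNameservers = 3 // glibc resolver limit (MAXNS)

// applyNameserverLimit trims the list to the resolver limit and
// reports whether anything was dropped.
func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Assumed input: the droplet's resolv.conf apparently expands to
	// more than three entries, including a duplicate 67.207.67.3.
	servers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.2"}
	applied, truncated := applyNameserverLimit(servers)
	if truncated {
		fmt.Printf("Nameserver limits exceeded; applied line: %v\n", applied)
	}
}
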
Mar 17 17:55:59.041437 sshd[4280]: Connection closed by 139.178.68.195 port 45134 Mar 17 17:55:59.045988 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:59.053610 systemd[1]: sshd@30-24.199.119.133:22-139.178.68.195:45134.service: Deactivated successfully. Mar 17 17:55:59.058320 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:55:59.059649 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:55:59.062306 systemd-logind[1469]: Removed session 24. Mar 17 17:55:59.948663 sshd[4258]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:56:00.748953 sshd-session[4290]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Mar 17 17:56:02.333248 sshd[4258]: PAM: Permission denied for root from 218.92.0.188 Mar 17 17:56:02.514985 sshd[4258]: Received disconnect from 218.92.0.188 port 46325:11: [preauth] Mar 17 17:56:02.514985 sshd[4258]: Disconnected from authenticating user root 218.92.0.188 port 46325 [preauth] Mar 17 17:56:02.516749 systemd[1]: sshd@28-24.199.119.133:22-218.92.0.188:46325.service: Deactivated successfully. Mar 17 17:56:04.085482 systemd[1]: Started sshd@31-24.199.119.133:22-139.178.68.195:45150.service - OpenSSH per-connection server daemon (139.178.68.195:45150). Mar 17 17:56:04.216574 sshd[4296]: Accepted publickey for core from 139.178.68.195 port 45150 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:56:04.219205 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:04.229306 systemd-logind[1469]: New session 25 of user core. Mar 17 17:56:04.234276 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:56:04.450221 sshd[4298]: Connection closed by 139.178.68.195 port 45150 Mar 17 17:56:04.449805 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:04.481144 systemd[1]: sshd@31-24.199.119.133:22-139.178.68.195:45150.service: Deactivated successfully. Mar 17 17:56:04.485876 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:56:04.488688 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:56:04.501432 systemd[1]: Started sshd@32-24.199.119.133:22-139.178.68.195:45152.service - OpenSSH per-connection server daemon (139.178.68.195:45152). Mar 17 17:56:04.505325 systemd-logind[1469]: Removed session 25. Mar 17 17:56:04.576770 sshd[4309]: Accepted publickey for core from 139.178.68.195 port 45152 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE Mar 17 17:56:04.580153 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:04.601127 systemd-logind[1469]: New session 26 of user core. Mar 17 17:56:04.606271 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:56:06.992313 containerd[1492]: time="2025-03-17T17:56:06.991870864Z" level=info msg="StopContainer for \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\" with timeout 30 (s)" Mar 17 17:56:06.998529 containerd[1492]: time="2025-03-17T17:56:06.998348006Z" level=info msg="Stop container \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\" with signal terminated" Mar 17 17:56:07.060521 systemd[1]: cri-containerd-dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4.scope: Deactivated successfully. 
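
The StopContainer entries that follow ("with timeout 30 (s)", then "Stop container ... with signal terminated") reflect the usual CRI stop sequence: deliver SIGTERM, wait up to the requested timeout, then escalate to SIGKILL if the task has not exited. A simplified sketch of that shape — an assumption about the flow, not containerd's code:

package main

import (
	"fmt"
	"time"
)

// stopContainer sends the termination signal, then waits up to
// timeout for the exited channel before escalating.
func stopContainer(id string, timeout time.Duration, exited <-chan struct{}) {
	fmt.Printf("Stop container %q with signal terminated\n", id)
	select {
	case <-exited:
		fmt.Println("task exited within timeout")
	case <-time.After(timeout):
		fmt.Printf("timeout %s elapsed; killing %q\n", timeout, id)
	}
}

func main() {
	exited := make(chan struct{})
	close(exited) // pretend the container exits immediately, as dc0031ff... did
	stopContainer("dc0031ff157e", 30*time.Second, exited)
}
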
Mar 17 17:56:07.120710 containerd[1492]: time="2025-03-17T17:56:07.120633832Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:56:07.143740 containerd[1492]: time="2025-03-17T17:56:07.143680638Z" level=info msg="StopContainer for \"8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4\" with timeout 2 (s)" Mar 17 17:56:07.145887 containerd[1492]: time="2025-03-17T17:56:07.145048720Z" level=info msg="Stop container \"8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4\" with signal terminated" Mar 17 17:56:07.182160 systemd-networkd[1393]: lxc_health: Link DOWN Mar 17 17:56:07.182174 systemd-networkd[1393]: lxc_health: Lost carrier Mar 17 17:56:07.184702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4-rootfs.mount: Deactivated successfully. Mar 17 17:56:07.204088 containerd[1492]: time="2025-03-17T17:56:07.200933935Z" level=info msg="shim disconnected" id=dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4 namespace=k8s.io Mar 17 17:56:07.204088 containerd[1492]: time="2025-03-17T17:56:07.201124605Z" level=warning msg="cleaning up after shim disconnected" id=dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4 namespace=k8s.io Mar 17 17:56:07.204088 containerd[1492]: time="2025-03-17T17:56:07.201159790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:07.235008 systemd[1]: cri-containerd-8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4.scope: Deactivated successfully. Mar 17 17:56:07.235531 systemd[1]: cri-containerd-8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4.scope: Consumed 10.449s CPU time, 182.3M memory peak, 59.4M read from disk, 13.3M written to disk. Mar 17 17:56:07.285985 containerd[1492]: time="2025-03-17T17:56:07.285792613Z" level=info msg="StopContainer for \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\" returns successfully" Mar 17 17:56:07.291666 containerd[1492]: time="2025-03-17T17:56:07.291589965Z" level=info msg="StopPodSandbox for \"a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab\"" Mar 17 17:56:07.320001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4-rootfs.mount: Deactivated successfully. 
Mar 17 17:56:07.325489 containerd[1492]: time="2025-03-17T17:56:07.325327541Z" level=info msg="shim disconnected" id=8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4 namespace=k8s.io Mar 17 17:56:07.325489 containerd[1492]: time="2025-03-17T17:56:07.325418394Z" level=warning msg="cleaning up after shim disconnected" id=8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4 namespace=k8s.io Mar 17 17:56:07.325489 containerd[1492]: time="2025-03-17T17:56:07.325440978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:07.327500 containerd[1492]: time="2025-03-17T17:56:07.297535149Z" level=info msg="Container to stop \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:07.336861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab-shm.mount: Deactivated successfully. Mar 17 17:56:07.358621 systemd[1]: cri-containerd-a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab.scope: Deactivated successfully. Mar 17 17:56:07.389528 containerd[1492]: time="2025-03-17T17:56:07.389459295Z" level=info msg="StopContainer for \"8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4\" returns successfully" Mar 17 17:56:07.390815 containerd[1492]: time="2025-03-17T17:56:07.390643141Z" level=info msg="StopPodSandbox for \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\"" Mar 17 17:56:07.390815 containerd[1492]: time="2025-03-17T17:56:07.390717753Z" level=info msg="Container to stop \"8651249fde162d5ae1c61c817b16acd75461eb90580f0ca6ab434fbd2faf9cf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:07.390815 containerd[1492]: time="2025-03-17T17:56:07.390779313Z" level=info msg="Container to stop \"31f744582357cb245d26f3eacdb2f6d364f6c952dfd9f9d958ff7a0cddced07b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:07.390815 containerd[1492]: time="2025-03-17T17:56:07.390791972Z" level=info msg="Container to stop \"de6fb2b8e83f554cc1085983a5f989364d81caedab649fa5bd50eebfc2c6d895\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:07.391769 containerd[1492]: time="2025-03-17T17:56:07.391155110Z" level=info msg="Container to stop \"338b43c0edd701ec4b7d3c9596870bc2071fe3c99f05a00e487c3b81f5c91060\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:07.391769 containerd[1492]: time="2025-03-17T17:56:07.391178603Z" level=info msg="Container to stop \"8055b970f77427e6287829495ae2df02612e8c777a9ae5a71b259e5175a842a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:07.402139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd-shm.mount: Deactivated successfully. Mar 17 17:56:07.413053 systemd[1]: cri-containerd-9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd.scope: Deactivated successfully. 
Mar 17 17:56:07.471772 containerd[1492]: time="2025-03-17T17:56:07.471543061Z" level=info msg="shim disconnected" id=a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab namespace=k8s.io Mar 17 17:56:07.471772 containerd[1492]: time="2025-03-17T17:56:07.471630417Z" level=warning msg="cleaning up after shim disconnected" id=a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab namespace=k8s.io Mar 17 17:56:07.471772 containerd[1492]: time="2025-03-17T17:56:07.471643293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:07.494167 containerd[1492]: time="2025-03-17T17:56:07.493809624Z" level=info msg="shim disconnected" id=9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd namespace=k8s.io Mar 17 17:56:07.494167 containerd[1492]: time="2025-03-17T17:56:07.493915064Z" level=warning msg="cleaning up after shim disconnected" id=9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd namespace=k8s.io Mar 17 17:56:07.494167 containerd[1492]: time="2025-03-17T17:56:07.493932966Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:07.514669 containerd[1492]: time="2025-03-17T17:56:07.514598658Z" level=info msg="TearDown network for sandbox \"a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab\" successfully" Mar 17 17:56:07.515220 containerd[1492]: time="2025-03-17T17:56:07.514998357Z" level=info msg="StopPodSandbox for \"a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab\" returns successfully" Mar 17 17:56:07.558183 containerd[1492]: time="2025-03-17T17:56:07.556684119Z" level=info msg="TearDown network for sandbox \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" successfully" Mar 17 17:56:07.558183 containerd[1492]: time="2025-03-17T17:56:07.556750673Z" level=info msg="StopPodSandbox for \"9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd\" returns successfully" Mar 17 17:56:07.587656 kubelet[2601]: I0317 17:56:07.585387 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd" Mar 17 17:56:07.594076 kubelet[2601]: I0317 17:56:07.593235 2601 scope.go:117] "RemoveContainer" containerID="dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4" Mar 17 17:56:07.621665 kubelet[2601]: I0317 17:56:07.621580 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-bpf-maps\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.621665 kubelet[2601]: I0317 17:56:07.621682 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l4xw\" (UniqueName: \"kubernetes.io/projected/7f692ac4-12a5-4247-b4c3-da73eae3ab35-kube-api-access-5l4xw\") pod \"7f692ac4-12a5-4247-b4c3-da73eae3ab35\" (UID: \"7f692ac4-12a5-4247-b4c3-da73eae3ab35\") " Mar 17 17:56:07.621995 kubelet[2601]: I0317 17:56:07.621716 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cni-path\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.621995 kubelet[2601]: I0317 17:56:07.621749 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-etc-cni-netd\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.621995 kubelet[2601]: I0317 17:56:07.621779 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-hubble-tls\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.621995 kubelet[2601]: I0317 17:56:07.621808 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-net\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.621995 kubelet[2601]: I0317 17:56:07.621860 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxlsl\" (UniqueName: \"kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-kube-api-access-vxlsl\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.621995 kubelet[2601]: I0317 17:56:07.621886 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-lib-modules\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.622293 kubelet[2601]: I0317 17:56:07.621918 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-config-path\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.622293 kubelet[2601]: I0317 17:56:07.621949 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f692ac4-12a5-4247-b4c3-da73eae3ab35-cilium-config-path\") pod \"7f692ac4-12a5-4247-b4c3-da73eae3ab35\" (UID: \"7f692ac4-12a5-4247-b4c3-da73eae3ab35\") " Mar 17 17:56:07.622293 kubelet[2601]: I0317 17:56:07.621975 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-run\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.622293 kubelet[2601]: I0317 17:56:07.622004 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-xtables-lock\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.622293 kubelet[2601]: I0317 17:56:07.622029 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-hostproc\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.622293 kubelet[2601]: I0317 17:56:07.622055 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/8ccdb914-4378-4042-8c14-9d432415fa36-clustermesh-secrets\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.627887 kubelet[2601]: I0317 17:56:07.622082 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-kernel\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.627887 kubelet[2601]: I0317 17:56:07.622109 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-cgroup\") pod \"8ccdb914-4378-4042-8c14-9d432415fa36\" (UID: \"8ccdb914-4378-4042-8c14-9d432415fa36\") " Mar 17 17:56:07.627887 kubelet[2601]: I0317 17:56:07.622240 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.627887 kubelet[2601]: I0317 17:56:07.622307 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.630165 kubelet[2601]: I0317 17:56:07.629929 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.630165 kubelet[2601]: I0317 17:56:07.630001 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.630986 containerd[1492]: time="2025-03-17T17:56:07.630442333Z" level=info msg="RemoveContainer for \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\"" Mar 17 17:56:07.634968 kubelet[2601]: I0317 17:56:07.632509 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.634968 kubelet[2601]: I0317 17:56:07.632613 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.634968 kubelet[2601]: I0317 17:56:07.632633 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.638646 kubelet[2601]: I0317 17:56:07.638555 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.639286 kubelet[2601]: I0317 17:56:07.639223 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.641991 containerd[1492]: time="2025-03-17T17:56:07.641791248Z" level=info msg="RemoveContainer for \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\" returns successfully" Mar 17 17:56:07.642509 kubelet[2601]: I0317 17:56:07.642378 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:07.644945 kubelet[2601]: I0317 17:56:07.644253 2601 scope.go:117] "RemoveContainer" containerID="dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4" Mar 17 17:56:07.651799 kubelet[2601]: I0317 17:56:07.650568 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:56:07.655008 containerd[1492]: time="2025-03-17T17:56:07.651786211Z" level=error msg="ContainerStatus for \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\": not found" Mar 17 17:56:07.661794 kubelet[2601]: E0317 17:56:07.661614 2601 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\": not found" containerID="dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4" Mar 17 17:56:07.663790 kubelet[2601]: I0317 17:56:07.661927 2601 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4"} err="failed to get container status \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4\": not found" Mar 17 17:56:07.663790 kubelet[2601]: I0317 17:56:07.662094 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:07.665761 kubelet[2601]: I0317 17:56:07.665500 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f692ac4-12a5-4247-b4c3-da73eae3ab35-kube-api-access-5l4xw" (OuterVolumeSpecName: "kube-api-access-5l4xw") pod "7f692ac4-12a5-4247-b4c3-da73eae3ab35" (UID: "7f692ac4-12a5-4247-b4c3-da73eae3ab35"). InnerVolumeSpecName "kube-api-access-5l4xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:07.668920 kubelet[2601]: I0317 17:56:07.668551 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccdb914-4378-4042-8c14-9d432415fa36-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:56:07.670132 kubelet[2601]: I0317 17:56:07.669888 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f692ac4-12a5-4247-b4c3-da73eae3ab35-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f692ac4-12a5-4247-b4c3-da73eae3ab35" (UID: "7f692ac4-12a5-4247-b4c3-da73eae3ab35"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:56:07.671241 kubelet[2601]: I0317 17:56:07.671191 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-kube-api-access-vxlsl" (OuterVolumeSpecName: "kube-api-access-vxlsl") pod "8ccdb914-4378-4042-8c14-9d432415fa36" (UID: "8ccdb914-4378-4042-8c14-9d432415fa36"). InnerVolumeSpecName "kube-api-access-vxlsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.722921 2601 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-cgroup\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.722981 2601 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-bpf-maps\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.722998 2601 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5l4xw\" (UniqueName: \"kubernetes.io/projected/7f692ac4-12a5-4247-b4c3-da73eae3ab35-kube-api-access-5l4xw\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.723021 2601 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cni-path\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.723036 2601 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-etc-cni-netd\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.723052 2601 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-hubble-tls\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.723066 2601 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-net\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.723284 kubelet[2601]: I0317 17:56:07.723081 2601 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vxlsl\" (UniqueName: \"kubernetes.io/projected/8ccdb914-4378-4042-8c14-9d432415fa36-kube-api-access-vxlsl\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 17:56:07.723096 2601 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-lib-modules\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 17:56:07.723110 2601 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-config-path\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 17:56:07.723125 2601 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f692ac4-12a5-4247-b4c3-da73eae3ab35-cilium-config-path\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 17:56:07.723139 2601 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-host-proc-sys-kernel\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 
17:56:07.723153 2601 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-cilium-run\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 17:56:07.723168 2601 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-xtables-lock\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 17:56:07.723194 2601 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ccdb914-4378-4042-8c14-9d432415fa36-hostproc\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.724237 kubelet[2601]: I0317 17:56:07.723209 2601 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ccdb914-4378-4042-8c14-9d432415fa36-clustermesh-secrets\") on node \"ci-4230.1.0-f-ebc70812f4\" DevicePath \"\"" Mar 17 17:56:07.902457 systemd[1]: Removed slice kubepods-besteffort-pod7f692ac4_12a5_4247_b4c3_da73eae3ab35.slice - libcontainer container kubepods-besteffort-pod7f692ac4_12a5_4247_b4c3_da73eae3ab35.slice. Mar 17 17:56:08.040062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7b80b72432f410b6950c98f97c385d6fcf48e1596b34c5fdcf2aaf1f3aa51ab-rootfs.mount: Deactivated successfully. Mar 17 17:56:08.040297 systemd[1]: var-lib-kubelet-pods-7f692ac4\x2d12a5\x2d4247\x2db4c3\x2dda73eae3ab35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5l4xw.mount: Deactivated successfully. Mar 17 17:56:08.040420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e808294057c550e1c299868218b09f3ed1788851b8d6ee07e550a5f3ba06ccd-rootfs.mount: Deactivated successfully. Mar 17 17:56:08.040528 systemd[1]: var-lib-kubelet-pods-8ccdb914\x2d4378\x2d4042\x2d8c14\x2d9d432415fa36-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxlsl.mount: Deactivated successfully. Mar 17 17:56:08.040638 systemd[1]: var-lib-kubelet-pods-8ccdb914\x2d4378\x2d4042\x2d8c14\x2d9d432415fa36-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:56:08.040750 systemd[1]: var-lib-kubelet-pods-8ccdb914\x2d4378\x2d4042\x2d8c14\x2d9d432415fa36-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:56:08.322221 kubelet[2601]: E0317 17:56:08.322016 2601 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:56:08.626791 systemd[1]: Removed slice kubepods-burstable-pod8ccdb914_4378_4042_8c14_9d432415fa36.slice - libcontainer container kubepods-burstable-pod8ccdb914_4378_4042_8c14_9d432415fa36.slice. Mar 17 17:56:08.628289 systemd[1]: kubepods-burstable-pod8ccdb914_4378_4042_8c14_9d432415fa36.slice: Consumed 10.576s CPU time, 182.6M memory peak, 59.5M read from disk, 13.3M written to disk. Mar 17 17:56:08.870791 sshd[4312]: Connection closed by 139.178.68.195 port 45152 Mar 17 17:56:08.873804 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:08.890605 systemd[1]: sshd@32-24.199.119.133:22-139.178.68.195:45152.service: Deactivated successfully. Mar 17 17:56:08.898509 systemd[1]: session-26.scope: Deactivated successfully. 
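The NotFound errors above are a benign race during pod teardown: kubelet asks the runtime for the status of a container containerd has already deleted, receives gRPC NotFound, and concludes the container is gone. A minimal sketch of that call path against the CRI v1 API, assuming containerd's conventional default socket path (the endpoint is not recorded in this log):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// The ID below is the container the log shows being queried after deletion.
	_, err = client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: "dc0031ff157e652338b9feeb389921cb487cd6405076f2e264a77a77761b0aa4",
	})
	if status.Code(err) == codes.NotFound {
		// kubelet's pod_container_deletor treats NotFound as already-removed,
		// which is exactly the "DeleteContainer returned error" entry above.
		fmt.Println("container already removed:", err)
	}
}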
Mar 17 17:56:08.899118 systemd[1]: session-26.scope: Consumed 1.505s CPU time, 28.1M memory peak.
Mar 17 17:56:08.909381 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit.
Mar 17 17:56:08.922746 systemd[1]: Started sshd@33-24.199.119.133:22-139.178.68.195:41504.service - OpenSSH per-connection server daemon (139.178.68.195:41504).
Mar 17 17:56:08.930152 systemd-logind[1469]: Removed session 26.
Mar 17 17:56:08.984557 kubelet[2601]: I0317 17:56:08.979743 2601 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f692ac4-12a5-4247-b4c3-da73eae3ab35" path="/var/lib/kubelet/pods/7f692ac4-12a5-4247-b4c3-da73eae3ab35/volumes"
Mar 17 17:56:08.984557 kubelet[2601]: I0317 17:56:08.980605 2601 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccdb914-4378-4042-8c14-9d432415fa36" path="/var/lib/kubelet/pods/8ccdb914-4378-4042-8c14-9d432415fa36/volumes"
Mar 17 17:56:09.094896 sshd[4473]: Accepted publickey for core from 139.178.68.195 port 41504 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:56:09.097497 sshd-session[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:56:09.158127 systemd-logind[1469]: New session 27 of user core.
Mar 17 17:56:09.165341 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 17:56:10.341444 sshd[4476]: Connection closed by 139.178.68.195 port 41504
Mar 17 17:56:10.345599 sshd-session[4473]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:10.362750 systemd[1]: sshd@33-24.199.119.133:22-139.178.68.195:41504.service: Deactivated successfully.
Mar 17 17:56:10.372514 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 17:56:10.378021 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit.
Mar 17 17:56:10.387096 systemd-logind[1469]: Removed session 27.
Mar 17 17:56:10.405524 systemd[1]: Started sshd@34-24.199.119.133:22-139.178.68.195:41514.service - OpenSSH per-connection server daemon (139.178.68.195:41514).
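The \x2d and \x7e sequences in the earlier mount-unit names are systemd's unit-name escaping: "/" is written as "-", a literal "-" as \x2d, and "~" as \x7e, so each deactivated .mount unit maps back to one of the /var/lib/kubelet/pods/<uid>/volumes/... directories that kubelet reports as cleaned up above. A small illustrative decoder (the escaping rules are systemd's; the helper function itself is hypothetical):

package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// unescapeMountUnit inverts systemd unit-name escaping for .mount units:
// "-" separates path components, "\xNN" encodes the byte 0xNN.
func unescapeMountUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	path := "/" + strings.ReplaceAll(name, "-", "/")
	re := regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)
	return re.ReplaceAllStringFunc(path, func(m string) string {
		b, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(b))
	})
}

func main() {
	fmt.Println(unescapeMountUnit(
		`var-lib-kubelet-pods-7f692ac4\x2d12a5\x2d4247\x2db4c3\x2dda73eae3ab35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5l4xw.mount`))
	// Output: /var/lib/kubelet/pods/7f692ac4-12a5-4247-b4c3-da73eae3ab35/volumes/kubernetes.io~projected/kube-api-access-5l4xw
}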
Mar 17 17:56:10.440737 kubelet[2601]: E0317 17:56:10.440662 2601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f692ac4-12a5-4247-b4c3-da73eae3ab35" containerName="cilium-operator"
Mar 17 17:56:10.443897 kubelet[2601]: E0317 17:56:10.441910 2601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ccdb914-4378-4042-8c14-9d432415fa36" containerName="clean-cilium-state"
Mar 17 17:56:10.443897 kubelet[2601]: E0317 17:56:10.441996 2601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ccdb914-4378-4042-8c14-9d432415fa36" containerName="mount-bpf-fs"
Mar 17 17:56:10.443897 kubelet[2601]: E0317 17:56:10.442015 2601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ccdb914-4378-4042-8c14-9d432415fa36" containerName="mount-cgroup"
Mar 17 17:56:10.443897 kubelet[2601]: E0317 17:56:10.442027 2601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ccdb914-4378-4042-8c14-9d432415fa36" containerName="apply-sysctl-overwrites"
Mar 17 17:56:10.443897 kubelet[2601]: E0317 17:56:10.442078 2601 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ccdb914-4378-4042-8c14-9d432415fa36" containerName="cilium-agent"
Mar 17 17:56:10.443897 kubelet[2601]: I0317 17:56:10.442166 2601 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccdb914-4378-4042-8c14-9d432415fa36" containerName="cilium-agent"
Mar 17 17:56:10.443897 kubelet[2601]: I0317 17:56:10.442181 2601 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f692ac4-12a5-4247-b4c3-da73eae3ab35" containerName="cilium-operator"
Mar 17 17:56:10.485104 systemd[1]: Created slice kubepods-burstable-pod71b71827_d154_46f1_a23d_9c7054a561c2.slice - libcontainer container kubepods-burstable-pod71b71827_d154_46f1_a23d_9c7054a561c2.slice.
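The RemoveStaleState lines show kubelet's CPU and memory managers dropping per-container pinning state for the two deleted pods before the replacement cilium pod (the new kubepods-burstable-pod71b71827... slice) is admitted. That state is checkpointed on disk; a sketch for inspecting it, assuming the default kubelet state directory (with the default "none" CPU policy the file exists but records no exclusive assignments):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Default kubelet checkpoint path; adjust if the kubelet root dir was moved.
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var state map[string]any
	if err := json.Unmarshal(raw, &state); err != nil {
		panic(err)
	}
	pretty, _ := json.MarshalIndent(state, "", "  ")
	fmt.Println(string(pretty))
}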
Mar 17 17:56:10.554761 sshd[4488]: Accepted publickey for core from 139.178.68.195 port 41514 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:56:10.558284 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:56:10.566805 kubelet[2601]: I0317 17:56:10.566681 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71b71827-d154-46f1-a23d-9c7054a561c2-cilium-ipsec-secrets\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.566805 kubelet[2601]: I0317 17:56:10.566801 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71b71827-d154-46f1-a23d-9c7054a561c2-cilium-config-path\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.567105 kubelet[2601]: I0317 17:56:10.566845 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-bpf-maps\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.568214 kubelet[2601]: I0317 17:56:10.567713 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-cilium-cgroup\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.568939 kubelet[2601]: I0317 17:56:10.568285 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-xtables-lock\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.569221 kubelet[2601]: I0317 17:56:10.569188 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71b71827-d154-46f1-a23d-9c7054a561c2-clustermesh-secrets\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.569457 kubelet[2601]: I0317 17:56:10.569256 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-hostproc\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.569457 kubelet[2601]: I0317 17:56:10.569286 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-etc-cni-netd\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.569457 kubelet[2601]: I0317 17:56:10.569316 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-lib-modules\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.569457 kubelet[2601]: I0317 17:56:10.569344 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-cni-path\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.569457 kubelet[2601]: I0317 17:56:10.569370 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-host-proc-sys-kernel\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.569457 kubelet[2601]: I0317 17:56:10.569400 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71b71827-d154-46f1-a23d-9c7054a561c2-hubble-tls\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.570302 kubelet[2601]: I0317 17:56:10.569429 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7lv\" (UniqueName: \"kubernetes.io/projected/71b71827-d154-46f1-a23d-9c7054a561c2-kube-api-access-cr7lv\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.570302 kubelet[2601]: I0317 17:56:10.569467 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-cilium-run\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.570302 kubelet[2601]: I0317 17:56:10.569487 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71b71827-d154-46f1-a23d-9c7054a561c2-host-proc-sys-net\") pod \"cilium-qbxqn\" (UID: \"71b71827-d154-46f1-a23d-9c7054a561c2\") " pod="kube-system/cilium-qbxqn"
Mar 17 17:56:10.589228 systemd-logind[1469]: New session 28 of user core.
Mar 17 17:56:10.599406 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 17 17:56:10.709066 sshd[4491]: Connection closed by 139.178.68.195 port 41514
Mar 17 17:56:10.717598 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:10.766535 systemd[1]: sshd@34-24.199.119.133:22-139.178.68.195:41514.service: Deactivated successfully.
Mar 17 17:56:10.773036 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 17:56:10.780520 systemd-logind[1469]: Session 28 logged out. Waiting for processes to exit.
Mar 17 17:56:10.792076 systemd[1]: Started sshd@35-24.199.119.133:22-139.178.68.195:41518.service - OpenSSH per-connection server daemon (139.178.68.195:41518).
Mar 17 17:56:10.796655 systemd-logind[1469]: Removed session 28.
Mar 17 17:56:10.812679 kubelet[2601]: E0317 17:56:10.810816 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:10.814021 containerd[1492]: time="2025-03-17T17:56:10.813778591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbxqn,Uid:71b71827-d154-46f1-a23d-9c7054a561c2,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:10.891500 containerd[1492]: time="2025-03-17T17:56:10.889852938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:56:10.891800 containerd[1492]: time="2025-03-17T17:56:10.891253397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:56:10.892311 containerd[1492]: time="2025-03-17T17:56:10.891771335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:10.892433 containerd[1492]: time="2025-03-17T17:56:10.892286092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:10.944100 sshd[4501]: Accepted publickey for core from 139.178.68.195 port 41518 ssh2: RSA SHA256:nAUKsK2l9wjXYeF+xS7MSq6cfWij0pIIBV4i7QqSfSE
Mar 17 17:56:10.944381 systemd[1]: Started cri-containerd-e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4.scope - libcontainer container e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4.
Mar 17 17:56:10.948940 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:56:10.963496 systemd-logind[1469]: New session 29 of user core.
Mar 17 17:56:10.971334 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 17 17:56:11.014940 containerd[1492]: time="2025-03-17T17:56:11.014878400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbxqn,Uid:71b71827-d154-46f1-a23d-9c7054a561c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\""
Mar 17 17:56:11.020104 kubelet[2601]: E0317 17:56:11.020006 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:11.033511 containerd[1492]: time="2025-03-17T17:56:11.032777558Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:56:11.070266 containerd[1492]: time="2025-03-17T17:56:11.069684345Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea\""
Mar 17 17:56:11.071371 containerd[1492]: time="2025-03-17T17:56:11.071064173Z" level=info msg="StartContainer for \"3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea\""
Mar 17 17:56:11.140324 systemd[1]: Started cri-containerd-3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea.scope - libcontainer container 3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea.
Mar 17 17:56:11.269217 containerd[1492]: time="2025-03-17T17:56:11.268459403Z" level=info msg="StartContainer for \"3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea\" returns successfully"
Mar 17 17:56:11.306473 systemd[1]: cri-containerd-3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea.scope: Deactivated successfully.
Mar 17 17:56:11.365007 containerd[1492]: time="2025-03-17T17:56:11.364568271Z" level=info msg="shim disconnected" id=3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea namespace=k8s.io
Mar 17 17:56:11.365007 containerd[1492]: time="2025-03-17T17:56:11.364755173Z" level=warning msg="cleaning up after shim disconnected" id=3e8b9fa4ae154831ae484c688fb93aebee474c3d37ce66728f04715e3e8b99ea namespace=k8s.io
Mar 17 17:56:11.365007 containerd[1492]: time="2025-03-17T17:56:11.364775463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:11.399808 containerd[1492]: time="2025-03-17T17:56:11.396225492Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:56:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:56:11.631267 kubelet[2601]: E0317 17:56:11.630741 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:11.658218 containerd[1492]: time="2025-03-17T17:56:11.658137473Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:56:11.702899 containerd[1492]: time="2025-03-17T17:56:11.701041254Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3\""
Mar 17 17:56:11.703497 containerd[1492]: time="2025-03-17T17:56:11.703448224Z" level=info msg="StartContainer for \"1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3\""
Mar 17 17:56:11.786468 systemd[1]: run-containerd-runc-k8s.io-1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3-runc.OzA4My.mount: Deactivated successfully.
Mar 17 17:56:11.811602 systemd[1]: Started cri-containerd-1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3.scope - libcontainer container 1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3.
Mar 17 17:56:11.866774 containerd[1492]: time="2025-03-17T17:56:11.866687233Z" level=info msg="StartContainer for \"1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3\" returns successfully"
Mar 17 17:56:11.887290 systemd[1]: cri-containerd-1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3.scope: Deactivated successfully.
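Each init container in this sequence (mount-cgroup, then apply-sysctl-overwrites, and so on below) follows the same CRI pattern: CreateContainer within the sandbox, StartContainer, the transient cri-containerd-<id>.scope runs to completion and deactivates, and the shim disconnects. A sketch of that Create/Start pair, reusing the client and ctx from the ContainerStatus sketch earlier; the image and config fields are placeholders, not taken from this log:

// runInitContainer mirrors the "CreateContainer within sandbox ..." /
// "StartContainer ..." pairs in the log. sandboxID is the RunPodSandbox
// result (e40574cb... here); name is e.g. "mount-cgroup".
func runInitContainer(ctx context.Context, client runtimeapi.RuntimeServiceClient,
	sandboxID, name, image string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
	resp, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: name, Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: image}, // placeholder image ref
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	// containerd now starts a cri-containerd-<id>.scope unit; for an init
	// container the scope deactivates as soon as the process exits.
	_, err = client.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: resp.ContainerId,
	})
	return resp.ContainerId, err
}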
Mar 17 17:56:11.940935 containerd[1492]: time="2025-03-17T17:56:11.940583611Z" level=info msg="shim disconnected" id=1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3 namespace=k8s.io
Mar 17 17:56:11.940935 containerd[1492]: time="2025-03-17T17:56:11.940663299Z" level=warning msg="cleaning up after shim disconnected" id=1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3 namespace=k8s.io
Mar 17 17:56:11.940935 containerd[1492]: time="2025-03-17T17:56:11.940677835Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:11.985715 containerd[1492]: time="2025-03-17T17:56:11.985589193Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:56:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:56:12.641292 kubelet[2601]: E0317 17:56:12.641233 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:12.661884 containerd[1492]: time="2025-03-17T17:56:12.657188713Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:56:12.686780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1145fb8ba221f11203474b1d5cedec23ab3df0f3bf7f50c4d6a188ac8fbdfdc3-rootfs.mount: Deactivated successfully.
Mar 17 17:56:12.724507 containerd[1492]: time="2025-03-17T17:56:12.724422029Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917\""
Mar 17 17:56:12.726749 containerd[1492]: time="2025-03-17T17:56:12.725790320Z" level=info msg="StartContainer for \"c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917\""
Mar 17 17:56:12.808528 systemd[1]: Started cri-containerd-c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917.scope - libcontainer container c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917.
Mar 17 17:56:12.880950 containerd[1492]: time="2025-03-17T17:56:12.880879664Z" level=info msg="StartContainer for \"c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917\" returns successfully"
Mar 17 17:56:12.893200 systemd[1]: cri-containerd-c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917.scope: Deactivated successfully.
Mar 17 17:56:12.943330 containerd[1492]: time="2025-03-17T17:56:12.943214579Z" level=info msg="shim disconnected" id=c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917 namespace=k8s.io
Mar 17 17:56:12.943600 containerd[1492]: time="2025-03-17T17:56:12.943322465Z" level=warning msg="cleaning up after shim disconnected" id=c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917 namespace=k8s.io
Mar 17 17:56:12.943600 containerd[1492]: time="2025-03-17T17:56:12.943354179Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:13.324582 kubelet[2601]: E0317 17:56:13.323455 2601 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:56:13.648793 kubelet[2601]: E0317 17:56:13.647875 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:13.653654 containerd[1492]: time="2025-03-17T17:56:13.653553309Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:56:13.682949 containerd[1492]: time="2025-03-17T17:56:13.682877887Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f\""
Mar 17 17:56:13.687897 containerd[1492]: time="2025-03-17T17:56:13.685506490Z" level=info msg="StartContainer for \"725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f\""
Mar 17 17:56:13.688531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7d66f0203dd9077ce46ba5dc806dcd801ba5ffeb83851fedf75d59147840917-rootfs.mount: Deactivated successfully.
Mar 17 17:56:13.770241 systemd[1]: Started cri-containerd-725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f.scope - libcontainer container 725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f.
Mar 17 17:56:13.836616 systemd[1]: cri-containerd-725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f.scope: Deactivated successfully.
Mar 17 17:56:13.843239 containerd[1492]: time="2025-03-17T17:56:13.842403020Z" level=info msg="StartContainer for \"725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f\" returns successfully"
Mar 17 17:56:13.888572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f-rootfs.mount: Deactivated successfully.
Mar 17 17:56:13.892861 containerd[1492]: time="2025-03-17T17:56:13.892747886Z" level=info msg="shim disconnected" id=725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f namespace=k8s.io
Mar 17 17:56:13.892861 containerd[1492]: time="2025-03-17T17:56:13.892852626Z" level=warning msg="cleaning up after shim disconnected" id=725a55cc589c8d8d7f9fc00a00069659f2d02b0f701d8545dc997db9410ff56f namespace=k8s.io
Mar 17 17:56:13.892861 containerd[1492]: time="2025-03-17T17:56:13.892868089Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:14.657476 kubelet[2601]: E0317 17:56:14.656386 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:14.663164 containerd[1492]: time="2025-03-17T17:56:14.662711377Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:56:14.701224 containerd[1492]: time="2025-03-17T17:56:14.701136436Z" level=info msg="CreateContainer within sandbox \"e40574cbc7ca0bec4d7f58cbaca06282fd15508a2fe41730753bd37edd19a4c4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5\""
Mar 17 17:56:14.711114 containerd[1492]: time="2025-03-17T17:56:14.702660782Z" level=info msg="StartContainer for \"0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5\""
Mar 17 17:56:14.832012 systemd[1]: Started cri-containerd-0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5.scope - libcontainer container 0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5.
Mar 17 17:56:15.000711 containerd[1492]: time="2025-03-17T17:56:14.998746415Z" level=info msg="StartContainer for \"0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5\" returns successfully"
Mar 17 17:56:15.668807 kubelet[2601]: E0317 17:56:15.668322 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:15.695482 systemd[1]: run-containerd-runc-k8s.io-0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5-runc.F1JQXG.mount: Deactivated successfully.
Mar 17 17:56:16.012915 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 17:56:16.411447 kubelet[2601]: I0317 17:56:16.410662 2601 setters.go:600] "Node became not ready" node="ci-4230.1.0-f-ebc70812f4" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:56:16Z","lastTransitionTime":"2025-03-17T17:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:56:16.815962 kubelet[2601]: E0317 17:56:16.812889 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:17.095077 systemd[1]: Started sshd@36-24.199.119.133:22-115.113.173.34:58190.service - OpenSSH per-connection server daemon (115.113.173.34:58190).
Mar 17 17:56:17.707507 systemd[1]: run-containerd-runc-k8s.io-0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5-runc.RjgRsn.mount: Deactivated successfully.
Mar 17 17:56:18.232958 sshd[4936]: Invalid user dolphinscheduler from 115.113.173.34 port 58190
Mar 17 17:56:18.485478 sshd[4936]: Connection closed by invalid user dolphinscheduler 115.113.173.34 port 58190 [preauth]
Mar 17 17:56:18.488065 systemd[1]: sshd@36-24.199.119.133:22-115.113.173.34:58190.service: Deactivated successfully.
Mar 17 17:56:21.203449 systemd-networkd[1393]: lxc_health: Link UP
Mar 17 17:56:21.204289 systemd-networkd[1393]: lxc_health: Gained carrier
Mar 17 17:56:22.727104 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Mar 17 17:56:22.813744 kubelet[2601]: E0317 17:56:22.813438 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:22.853491 kubelet[2601]: I0317 17:56:22.852711 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qbxqn" podStartSLOduration=12.852687172 podStartE2EDuration="12.852687172s" podCreationTimestamp="2025-03-17 17:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:15.724114909 +0000 UTC m=+133.120395840" watchObservedRunningTime="2025-03-17 17:56:22.852687172 +0000 UTC m=+140.248968434"
Mar 17 17:56:23.713199 kubelet[2601]: E0317 17:56:23.713143 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:24.669540 systemd[1]: run-containerd-runc-k8s.io-0b258bfb6b7b376ea990b5b9cb8c97c6bd31971944266a329bced1de76e6c3a5-runc.cfabpM.mount: Deactivated successfully.
Mar 17 17:56:24.719058 kubelet[2601]: E0317 17:56:24.718971 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 17:56:26.982786 sshd[4538]: Connection closed by 139.178.68.195 port 41518
Mar 17 17:56:26.986255 sshd-session[4501]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:26.990953 systemd[1]: sshd@35-24.199.119.133:22-139.178.68.195:41518.service: Deactivated successfully.
Mar 17 17:56:26.996575 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:56:27.002092 systemd-logind[1469]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:56:27.006696 systemd-logind[1469]: Removed session 29.
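One closing note on the startup-latency record above: podStartSLOduration is simply observedRunningTime minus podCreationTimestamp, i.e. 17:56:22.852687172 - 17:56:10 = 12.852687172 s, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate that no image pull was observed for this pod.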