Jan 29 16:20:56.031995 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:20:56.032040 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:20:56.032066 kernel: BIOS-provided physical RAM map:
Jan 29 16:20:56.032076 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:20:56.032088 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:20:56.032100 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:20:56.032114 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 29 16:20:56.032127 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 29 16:20:56.032140 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:20:56.032153 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:20:56.032175 kernel: NX (Execute Disable) protection: active
Jan 29 16:20:56.032188 kernel: APIC: Static calls initialized
Jan 29 16:20:56.032205 kernel: SMBIOS 2.8 present.
Jan 29 16:20:56.032245 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 29 16:20:56.032262 kernel: Hypervisor detected: KVM
Jan 29 16:20:56.032276 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:20:56.032305 kernel: kvm-clock: using sched offset of 3629152551 cycles
Jan 29 16:20:56.032319 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:20:56.032333 kernel: tsc: Detected 2494.140 MHz processor
Jan 29 16:20:56.032350 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:20:56.032364 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:20:56.032380 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 29 16:20:56.032395 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:20:56.032411 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:20:56.032435 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:20:56.032450 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 29 16:20:56.032466 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:56.032482 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:56.032495 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:56.032508 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 29 16:20:56.032521 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:56.032534 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:56.032549 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:56.032574 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:20:56.032589 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 29 16:20:56.032604 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 29 16:20:56.032619 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 29 16:20:56.032634 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 29 16:20:56.032648 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 29 16:20:56.032662 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 29 16:20:56.032691 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 29 16:20:56.032713 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 16:20:56.032729 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 16:20:56.032746 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 16:20:56.032762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 16:20:56.032782 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 29 16:20:56.032797 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 29 16:20:56.032823 kernel: Zone ranges:
Jan 29 16:20:56.032834 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:20:56.032843 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 29 16:20:56.032852 kernel: Normal empty
Jan 29 16:20:56.032861 kernel: Movable zone start for each node
Jan 29 16:20:56.032870 kernel: Early memory node ranges
Jan 29 16:20:56.032878 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:20:56.032887 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 29 16:20:56.032896 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 29 16:20:56.032911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:20:56.032920 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:20:56.032932 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 29 16:20:56.032941 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:20:56.032954 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:20:56.032968 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:20:56.032983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:20:56.032998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:20:56.033014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:20:56.033031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:20:56.033056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:20:56.033071 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:20:56.033085 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:20:56.033099 kernel: TSC deadline timer available
Jan 29 16:20:56.033113 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:20:56.033126 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:20:56.033141 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 29 16:20:56.033161 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:20:56.033175 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:20:56.033201 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:20:56.033211 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:20:56.033247 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:20:56.033264 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:20:56.033279 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 16:20:56.033298 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:20:56.033315 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:20:56.033346 kernel: random: crng init done
Jan 29 16:20:56.033374 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:20:56.033389 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:20:56.033402 kernel: Fallback order for Node 0: 0
Jan 29 16:20:56.033418 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 29 16:20:56.033432 kernel: Policy zone: DMA32
Jan 29 16:20:56.033455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:20:56.033473 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127196K reserved, 0K cma-reserved)
Jan 29 16:20:56.033486 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:20:56.033502 kernel: Kernel/User page tables isolation: enabled
Jan 29 16:20:56.033528 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:20:56.033544 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:20:56.033558 kernel: Dynamic Preempt: voluntary
Jan 29 16:20:56.033574 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:20:56.033592 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:20:56.033608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:20:56.033625 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:20:56.033640 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:20:56.033656 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:20:56.033682 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:20:56.033697 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:20:56.033711 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:20:56.033727 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:20:56.033747 kernel: Console: colour VGA+ 80x25
Jan 29 16:20:56.033763 kernel: printk: console [tty0] enabled
Jan 29 16:20:56.033779 kernel: printk: console [ttyS0] enabled
Jan 29 16:20:56.033795 kernel: ACPI: Core revision 20230628
Jan 29 16:20:56.033810 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:20:56.033834 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:20:56.033847 kernel: x2apic enabled
Jan 29 16:20:56.033862 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:20:56.033873 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:20:56.033882 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 29 16:20:56.033892 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jan 29 16:20:56.033901 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 16:20:56.033910 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 16:20:56.033944 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:20:56.033954 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:20:56.033964 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:20:56.033979 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:20:56.033989 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 16:20:56.033999 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:20:56.034008 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:20:56.034017 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 16:20:56.034030 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 16:20:56.034055 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:20:56.034070 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:20:56.034087 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:20:56.034104 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:20:56.034120 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 16:20:56.034136 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:20:56.034152 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:20:56.034164 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:20:56.034182 kernel: landlock: Up and running.
Jan 29 16:20:56.034191 kernel: SELinux: Initializing.
Jan 29 16:20:56.034204 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:20:56.036284 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:20:56.036332 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 29 16:20:56.036343 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:20:56.036354 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:20:56.036363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:20:56.036373 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 29 16:20:56.036397 kernel: signal: max sigframe size: 1776
Jan 29 16:20:56.036407 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:20:56.036418 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:20:56.036428 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 16:20:56.036437 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:20:56.036447 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:20:56.036456 kernel: .... node #0, CPUs: #1
Jan 29 16:20:56.036466 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:20:56.036478 kernel: smpboot: Max logical packages: 1
Jan 29 16:20:56.036493 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jan 29 16:20:56.036503 kernel: devtmpfs: initialized
Jan 29 16:20:56.036512 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:20:56.036522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:20:56.036531 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:20:56.036541 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:20:56.036550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:20:56.036560 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:20:56.036569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:20:56.036585 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:20:56.036594 kernel: audit: type=2000 audit(1738167654.665:1): state=initialized audit_enabled=0 res=1
Jan 29 16:20:56.036603 kernel: cpuidle: using governor menu
Jan 29 16:20:56.036613 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:20:56.036622 kernel: dca service started, version 1.12.1
Jan 29 16:20:56.036631 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:20:56.036641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:20:56.036650 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:20:56.036660 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:20:56.036676 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:20:56.036686 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:20:56.036695 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:20:56.036704 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:20:56.036713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:20:56.036722 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:20:56.036732 kernel: ACPI: Interpreter enabled
Jan 29 16:20:56.036741 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:20:56.036750 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:20:56.036766 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:20:56.036776 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:20:56.036785 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 16:20:56.036795 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:20:56.037059 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:20:56.037244 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 16:20:56.037421 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 16:20:56.037461 kernel: acpiphp: Slot [3] registered
Jan 29 16:20:56.037477 kernel: acpiphp: Slot [4] registered
Jan 29 16:20:56.037491 kernel: acpiphp: Slot [5] registered
Jan 29 16:20:56.037507 kernel: acpiphp: Slot [6] registered
Jan 29 16:20:56.037524 kernel: acpiphp: Slot [7] registered
Jan 29 16:20:56.037541 kernel: acpiphp: Slot [8] registered
Jan 29 16:20:56.037557 kernel: acpiphp: Slot [9] registered
Jan 29 16:20:56.037573 kernel: acpiphp: Slot [10] registered
Jan 29 16:20:56.037591 kernel: acpiphp: Slot [11] registered
Jan 29 16:20:56.037604 kernel: acpiphp: Slot [12] registered
Jan 29 16:20:56.037627 kernel: acpiphp: Slot [13] registered
Jan 29 16:20:56.037642 kernel: acpiphp: Slot [14] registered
Jan 29 16:20:56.037657 kernel: acpiphp: Slot [15] registered
Jan 29 16:20:56.037672 kernel: acpiphp: Slot [16] registered
Jan 29 16:20:56.037686 kernel: acpiphp: Slot [17] registered
Jan 29 16:20:56.037701 kernel: acpiphp: Slot [18] registered
Jan 29 16:20:56.037715 kernel: acpiphp: Slot [19] registered
Jan 29 16:20:56.037730 kernel: acpiphp: Slot [20] registered
Jan 29 16:20:56.037745 kernel: acpiphp: Slot [21] registered
Jan 29 16:20:56.037774 kernel: acpiphp: Slot [22] registered
Jan 29 16:20:56.037790 kernel: acpiphp: Slot [23] registered
Jan 29 16:20:56.037803 kernel: acpiphp: Slot [24] registered
Jan 29 16:20:56.037818 kernel: acpiphp: Slot [25] registered
Jan 29 16:20:56.037833 kernel: acpiphp: Slot [26] registered
Jan 29 16:20:56.037849 kernel: acpiphp: Slot [27] registered
Jan 29 16:20:56.037867 kernel: acpiphp: Slot [28] registered
Jan 29 16:20:56.037882 kernel: acpiphp: Slot [29] registered
Jan 29 16:20:56.037897 kernel: acpiphp: Slot [30] registered
Jan 29 16:20:56.037925 kernel: acpiphp: Slot [31] registered
Jan 29 16:20:56.037943 kernel: PCI host bridge to bus 0000:00
Jan 29 16:20:56.038762 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:20:56.038998 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:20:56.039151 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:20:56.039337 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 29 16:20:56.039551 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 29 16:20:56.039707 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:20:56.039964 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 16:20:56.040195 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 16:20:56.040463 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 16:20:56.040638 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 29 16:20:56.040808 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 16:20:56.040970 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 16:20:56.041126 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 16:20:56.041388 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 16:20:56.041534 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 29 16:20:56.041651 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 29 16:20:56.041838 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 16:20:56.042022 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 16:20:56.042315 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 16:20:56.042540 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 16:20:56.042719 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 16:20:56.042888 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 29 16:20:56.043062 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 29 16:20:56.043246 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 29 16:20:56.043416 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:20:56.043616 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:20:56.043726 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 29 16:20:56.043911 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 29 16:20:56.044083 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 29 16:20:56.044421 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:20:56.044597 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 29 16:20:56.044770 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 29 16:20:56.044982 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 29 16:20:56.045176 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 29 16:20:56.049760 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 29 16:20:56.050007 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 29 16:20:56.050179 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 29 16:20:56.050441 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:20:56.050617 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:20:56.050829 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 29 16:20:56.050996 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 29 16:20:56.051199 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:20:56.051412 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 29 16:20:56.051580 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 29 16:20:56.051744 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 29 16:20:56.051875 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 16:20:56.052004 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 29 16:20:56.052115 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 29 16:20:56.052128 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:20:56.052138 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:20:56.052148 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:20:56.052157 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:20:56.052167 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 16:20:56.052185 kernel: iommu: Default domain type: Translated
Jan 29 16:20:56.052195 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:20:56.052205 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:20:56.052214 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:20:56.054343 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:20:56.054371 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 29 16:20:56.054692 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 16:20:56.054872 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 16:20:56.055049 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:20:56.055067 kernel: vgaarb: loaded
Jan 29 16:20:56.055081 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:20:56.055096 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:20:56.055111 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:20:56.055128 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:20:56.055145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:20:56.055161 kernel: pnp: PnP ACPI init
Jan 29 16:20:56.055176 kernel: pnp: PnP ACPI: found 4 devices
Jan 29 16:20:56.055204 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:20:56.055373 kernel: NET: Registered PF_INET protocol family
Jan 29 16:20:56.055393 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:20:56.055410 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 16:20:56.055427 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:20:56.055444 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:20:56.055459 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 16:20:56.055474 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 16:20:56.055488 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:20:56.055518 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:20:56.055535 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:20:56.055553 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:20:56.055776 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:20:56.055933 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:20:56.056080 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:20:56.056248 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 29 16:20:56.056416 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 29 16:20:56.056629 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 16:20:56.056831 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 16:20:56.056862 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 16:20:56.057062 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 42589 usecs
Jan 29 16:20:56.057088 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:20:56.057105 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 16:20:56.057125 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 29 16:20:56.057143 kernel: Initialise system trusted keyrings
Jan 29 16:20:56.057161 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 16:20:56.057199 kernel: Key type asymmetric registered
Jan 29 16:20:56.057217 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:20:56.059340 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:20:56.059360 kernel: io scheduler mq-deadline registered
Jan 29 16:20:56.059377 kernel: io scheduler kyber registered
Jan 29 16:20:56.059394 kernel: io scheduler bfq registered
Jan 29 16:20:56.059412 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:20:56.059432 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 16:20:56.059449 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 16:20:56.059495 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 16:20:56.059512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:20:56.059529 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:20:56.059546 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:20:56.059564 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:20:56.059582 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:20:56.059917 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 16:20:56.059951 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:20:56.060156 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 16:20:56.061617 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T16:20:55 UTC (1738167655)
Jan 29 16:20:56.061818 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 29 16:20:56.061844 kernel: intel_pstate: CPU model not supported
Jan 29 16:20:56.061861 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:20:56.061878 kernel: Segment Routing with IPv6
Jan 29 16:20:56.061895 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:20:56.061908 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:20:56.061948 kernel: Key type dns_resolver registered
Jan 29 16:20:56.061961 kernel: IPI shorthand broadcast: enabled
Jan 29 16:20:56.061975 kernel: sched_clock: Marking stable (1257005243, 122350306)->(1401432672, -22077123)
Jan 29 16:20:56.061990 kernel: registered taskstats version 1
Jan 29 16:20:56.062006 kernel: Loading compiled-in X.509 certificates
Jan 29 16:20:56.062023 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:20:56.062040 kernel: Key type .fscrypt registered
Jan 29 16:20:56.062054 kernel: Key type fscrypt-provisioning registered
Jan 29 16:20:56.062070 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:20:56.062097 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:20:56.062115 kernel: ima: No architecture policies found
Jan 29 16:20:56.062129 kernel: clk: Disabling unused clocks
Jan 29 16:20:56.062146 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:20:56.062161 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:20:56.062271 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:20:56.062297 kernel: Run /init as init process
Jan 29 16:20:56.062314 kernel: with arguments:
Jan 29 16:20:56.062332 kernel: /init
Jan 29 16:20:56.062355 kernel: with environment:
Jan 29 16:20:56.062371 kernel: HOME=/
Jan 29 16:20:56.062385 kernel: TERM=linux
Jan 29 16:20:56.062403 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:20:56.062425 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:20:56.062449 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:20:56.062467 systemd[1]: Detected virtualization kvm.
Jan 29 16:20:56.062485 systemd[1]: Detected architecture x86-64.
Jan 29 16:20:56.062518 systemd[1]: Running in initrd.
Jan 29 16:20:56.062536 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:20:56.062555 systemd[1]: Hostname set to .
Jan 29 16:20:56.062573 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:20:56.062593 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:20:56.062611 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:20:56.062630 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:20:56.062649 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:20:56.062677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:20:56.062696 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:20:56.062717 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:20:56.062738 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:20:56.062758 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:20:56.062776 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:20:56.062794 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:20:56.062819 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:20:56.062836 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:20:56.062861 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:20:56.062878 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:20:56.062897 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:20:56.062923 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:20:56.062942 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:20:56.062961 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:20:56.062979 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:20:56.062996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:20:56.063010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:20:56.063027 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:20:56.063044 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:20:56.063061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:20:56.063091 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:20:56.063109 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:20:56.063127 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:20:56.063146 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:20:56.063165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:56.063184 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:20:56.063203 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:20:56.065409 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:20:56.065448 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:20:56.065564 systemd-journald[184]: Collecting audit messages is disabled.
Jan 29 16:20:56.065624 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:20:56.065647 systemd-journald[184]: Journal started
Jan 29 16:20:56.065688 systemd-journald[184]: Runtime Journal (/run/log/journal/99bb6b28773d45e687e4ce64f2da7027) is 4.9M, max 39.3M, 34.4M free.
Jan 29 16:20:56.044425 systemd-modules-load[185]: Inserted module 'overlay'
Jan 29 16:20:56.096638 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:20:56.096726 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:20:56.100443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:56.106161 kernel: Bridge firewalling registered
Jan 29 16:20:56.102216 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 29 16:20:56.107204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:20:56.118713 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:20:56.128650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:20:56.132577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:20:56.148459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:20:56.152321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:20:56.168345 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:20:56.181451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:20:56.195813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:20:56.196916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:20:56.206762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:20:56.215417 dracut-cmdline[218]: dracut-dracut-053
Jan 29 16:20:56.220962 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:20:56.263906 systemd-resolved[222]: Positive Trust Anchors:
Jan 29 16:20:56.264826 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:20:56.264870 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:20:56.271534 systemd-resolved[222]: Defaulting to hostname 'linux'.
Jan 29 16:20:56.274582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:20:56.275182 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:20:56.356329 kernel: SCSI subsystem initialized
Jan 29 16:20:56.369295 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:20:56.385314 kernel: iscsi: registered transport (tcp)
Jan 29 16:20:56.417317 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:20:56.417487 kernel: QLogic iSCSI HBA Driver
Jan 29 16:20:56.505839 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:20:56.512714 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:20:56.559639 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:20:56.559782 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:20:56.562295 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:20:56.616308 kernel: raid6: avx2x4 gen() 13217 MB/s
Jan 29 16:20:56.633299 kernel: raid6: avx2x2 gen() 12996 MB/s
Jan 29 16:20:56.650346 kernel: raid6: avx2x1 gen() 10126 MB/s
Jan 29 16:20:56.650496 kernel: raid6: using algorithm avx2x4 gen() 13217 MB/s
Jan 29 16:20:56.668360 kernel: raid6: .... xor() 5318 MB/s, rmw enabled
Jan 29 16:20:56.668505 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:20:56.699831 kernel: xor: automatically using best checksumming function avx
Jan 29 16:20:56.925299 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:20:56.945558 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:20:56.952649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:20:56.988064 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 29 16:20:56.997805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:20:57.006505 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:20:57.036471 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jan 29 16:20:57.081516 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:20:57.087647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:20:57.190718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:20:57.201912 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:20:57.228111 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:20:57.233532 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:20:57.234626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:20:57.235789 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:20:57.241523 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:20:57.286878 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:20:57.333691 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:20:57.335281 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 29 16:20:57.422767 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 29 16:20:57.422982 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:20:57.422998 kernel: GPT:9289727 != 125829119
Jan 29 16:20:57.423011 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:20:57.423024 kernel: GPT:9289727 != 125829119
Jan 29 16:20:57.423037 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:20:57.423049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:57.423062 kernel: libata version 3.00 loaded.
Jan 29 16:20:57.423075 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:20:57.423092 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 16:20:57.426592 kernel: scsi host1: ata_piix
Jan 29 16:20:57.426957 kernel: scsi host2: ata_piix
Jan 29 16:20:57.427431 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 29 16:20:57.427470 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 29 16:20:57.427500 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 29 16:20:57.445888 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Jan 29 16:20:57.446204 kernel: ACPI: bus type USB registered
Jan 29 16:20:57.446256 kernel: usbcore: registered new interface driver usbfs
Jan 29 16:20:57.446274 kernel: usbcore: registered new interface driver hub
Jan 29 16:20:57.446288 kernel: usbcore: registered new device driver usb
Jan 29 16:20:57.389365 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:20:57.489150 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:20:57.489300 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:20:57.389595 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:20:57.390613 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:20:57.391119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:20:57.391446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:57.392185 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:57.402555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:20:57.403717 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:20:57.491751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:20:57.501627 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:20:57.533880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:20:57.619580 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (455)
Jan 29 16:20:57.640290 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (466)
Jan 29 16:20:57.652113 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:20:57.659425 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 29 16:20:57.667495 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 29 16:20:57.667793 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 29 16:20:57.668034 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 29 16:20:57.668289 kernel: hub 1-0:1.0: USB hub found
Jan 29 16:20:57.668538 kernel: hub 1-0:1.0: 2 ports detected
Jan 29 16:20:57.683926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:20:57.708872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:20:57.721397 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:20:57.722143 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:20:57.735645 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:20:57.745508 disk-uuid[551]: Primary Header is updated.
Jan 29 16:20:57.745508 disk-uuid[551]: Secondary Entries is updated.
Jan 29 16:20:57.745508 disk-uuid[551]: Secondary Header is updated.
Jan 29 16:20:57.753302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:57.770344 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:58.766366 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:20:58.766476 disk-uuid[552]: The operation has completed successfully.
Jan 29 16:20:58.836059 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:20:58.836316 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:20:58.859605 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:20:58.878296 sh[563]: Success
Jan 29 16:20:58.896293 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 16:20:58.970482 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:20:58.988401 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:20:58.990285 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:20:59.014298 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:20:59.014401 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:20:59.014420 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:20:59.014866 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:20:59.015837 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:20:59.025937 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:20:59.027045 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:20:59.046681 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:20:59.051452 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:20:59.071573 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:20:59.071676 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:20:59.071690 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:20:59.076264 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:20:59.092730 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:20:59.094831 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:20:59.101581 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:20:59.108670 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:20:59.229914 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:20:59.244589 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:20:59.297258 ignition[656]: Ignition 2.20.0
Jan 29 16:20:59.297271 ignition[656]: Stage: fetch-offline
Jan 29 16:20:59.299193 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:20:59.297366 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:59.297378 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:59.297516 ignition[656]: parsed url from cmdline: ""
Jan 29 16:20:59.297520 ignition[656]: no config URL provided
Jan 29 16:20:59.297526 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:20:59.297540 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:20:59.297547 ignition[656]: failed to fetch config: resource requires networking
Jan 29 16:20:59.297766 ignition[656]: Ignition finished successfully
Jan 29 16:20:59.316738 systemd-networkd[750]: lo: Link UP
Jan 29 16:20:59.316755 systemd-networkd[750]: lo: Gained carrier
Jan 29 16:20:59.320619 systemd-networkd[750]: Enumeration completed
Jan 29 16:20:59.321049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:20:59.321127 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 16:20:59.321134 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 29 16:20:59.321628 systemd[1]: Reached target network.target - Network.
Jan 29 16:20:59.322948 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:20:59.322955 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:20:59.324161 systemd-networkd[750]: eth0: Link UP
Jan 29 16:20:59.324167 systemd-networkd[750]: eth0: Gained carrier
Jan 29 16:20:59.324182 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 16:20:59.327991 systemd-networkd[750]: eth1: Link UP
Jan 29 16:20:59.327998 systemd-networkd[750]: eth1: Gained carrier
Jan 29 16:20:59.328016 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:20:59.330086 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:20:59.341375 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.7/20 acquired from 169.254.169.253
Jan 29 16:20:59.346358 systemd-networkd[750]: eth0: DHCPv4 address 164.92.66.114/20, gateway 164.92.64.1 acquired from 169.254.169.253
Jan 29 16:20:59.353769 ignition[757]: Ignition 2.20.0
Jan 29 16:20:59.353784 ignition[757]: Stage: fetch
Jan 29 16:20:59.354308 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:59.354323 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:59.354926 ignition[757]: parsed url from cmdline: ""
Jan 29 16:20:59.354941 ignition[757]: no config URL provided
Jan 29 16:20:59.354949 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:20:59.354962 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:20:59.354992 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 29 16:20:59.371934 ignition[757]: GET result: OK
Jan 29 16:20:59.372551 ignition[757]: parsing config with SHA512: cfab21394dc40b03e3c94e8e71a31fc544f25d3a313383c705156ce3bd9071e73749c8544c44df667029878a0c0345a15bc183cb3e949b6193601d6625f778a6
Jan 29 16:20:59.379198 unknown[757]: fetched base config from "system"
Jan 29 16:20:59.379999 unknown[757]: fetched base config from "system"
Jan 29 16:20:59.380512 unknown[757]: fetched user config from "digitalocean"
Jan 29 16:20:59.381050 ignition[757]: fetch: fetch complete
Jan 29 16:20:59.381059 ignition[757]: fetch: fetch passed
Jan 29 16:20:59.381166 ignition[757]: Ignition finished successfully
Jan 29 16:20:59.385007 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:20:59.390571 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:20:59.419402 ignition[765]: Ignition 2.20.0
Jan 29 16:20:59.419424 ignition[765]: Stage: kargs
Jan 29 16:20:59.419740 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:59.419759 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:59.421494 ignition[765]: kargs: kargs passed
Jan 29 16:20:59.423615 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:20:59.421609 ignition[765]: Ignition finished successfully
Jan 29 16:20:59.429761 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:20:59.462867 ignition[771]: Ignition 2.20.0
Jan 29 16:20:59.462886 ignition[771]: Stage: disks
Jan 29 16:20:59.463268 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:20:59.466837 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:20:59.463291 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 16:20:59.464823 ignition[771]: disks: disks passed
Jan 29 16:20:59.469021 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:20:59.464934 ignition[771]: Ignition finished successfully
Jan 29 16:20:59.470306 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:20:59.470986 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:20:59.471839 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:20:59.472604 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:20:59.483747 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:20:59.505921 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:20:59.508763 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:21:00.016418 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:21:00.134540 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:21:00.135323 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:21:00.136321 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:21:00.143429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:21:00.146467 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:21:00.149017 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jan 29 16:21:00.157588 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 16:21:00.166059 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (787)
Jan 29 16:21:00.166090 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:21:00.166111 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:21:00.166131 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:21:00.165100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:21:00.165151 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:21:00.170570 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:21:00.173862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:21:00.177923 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:21:00.184680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:21:00.259806 coreos-metadata[789]: Jan 29 16:21:00.259 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 16:21:00.265037 coreos-metadata[790]: Jan 29 16:21:00.264 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 16:21:00.266383 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:21:00.270796 coreos-metadata[789]: Jan 29 16:21:00.270 INFO Fetch successful Jan 29 16:21:00.274210 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:21:00.279637 coreos-metadata[790]: Jan 29 16:21:00.278 INFO Fetch successful Jan 29 16:21:00.283964 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 29 16:21:00.285571 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 29 16:21:00.287692 coreos-metadata[790]: Jan 29 16:21:00.286 INFO wrote hostname ci-4230.0.0-8-df8e9582f3 to /sysroot/etc/hostname Jan 29 16:21:00.288456 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:21:00.288749 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:21:00.295151 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:21:00.416721 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:21:00.423422 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:21:00.426459 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:21:00.443261 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:21:00.467036 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:21:00.477349 ignition[907]: INFO : Ignition 2.20.0 Jan 29 16:21:00.477349 ignition[907]: INFO : Stage: mount Jan 29 16:21:00.478365 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:21:00.478365 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 16:21:00.479588 ignition[907]: INFO : mount: mount passed Jan 29 16:21:00.479992 ignition[907]: INFO : Ignition finished successfully Jan 29 16:21:00.481561 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:21:00.486464 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:21:00.616457 systemd-networkd[750]: eth0: Gained IPv6LL Jan 29 16:21:01.000476 systemd-networkd[750]: eth1: Gained IPv6LL Jan 29 16:21:01.012138 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:21:01.017594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:21:01.037254 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (920) Jan 29 16:21:01.039526 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:21:01.039608 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:21:01.040501 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:21:01.045494 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:21:01.047460 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
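The two coreos-metadata processes above both read the droplet metadata endpoint, and the hostname written to /sysroot/etc/hostname comes from that document. A trimmed, illustrative sketch of the JSON shape served at http://169.254.169.254/metadata/v1.json, keeping only fields this log actually exercises (the values shown are the ones visible in this log; the exact field set is an assumption):

    {
      "hostname": "ci-4230.0.0-8-df8e9582f3",
      "interfaces": {
        "public": [
          { "ipv4": { "ip_address": "164.92.66.114", "gateway": "164.92.64.1" } }
        ],
        "private": [
          { "ipv4": { "ip_address": "10.124.0.7" } }
        ]
      }
    }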
Jan 29 16:21:01.086388 ignition[937]: INFO : Ignition 2.20.0 Jan 29 16:21:01.086388 ignition[937]: INFO : Stage: files Jan 29 16:21:01.087413 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:21:01.087413 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 16:21:01.088385 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:21:01.088865 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:21:01.088865 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:21:01.091339 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:21:01.092107 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:21:01.093030 unknown[937]: wrote ssh authorized keys file for user: core Jan 29 16:21:01.093713 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:21:01.094602 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:21:01.095284 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 16:21:01.143262 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:21:01.270870 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:21:01.270870 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:21:01.273504 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 16:21:01.766121 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:21:01.837551 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:21:01.837551 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:21:01.839144 
ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:21:01.839144 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 16:21:02.256973 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:21:02.530715 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:21:02.530715 ignition[937]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:21:02.532177 ignition[937]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:21:02.532826 ignition[937]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:21:02.532826 ignition[937]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:21:02.532826 ignition[937]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:21:02.534598 ignition[937]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:21:02.534598 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:21:02.534598 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:21:02.534598 ignition[937]: INFO : files: files passed Jan 29 16:21:02.534598 ignition[937]: INFO : Ignition finished successfully Jan 29 16:21:02.535309 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:21:02.542508 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:21:02.544418 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:21:02.562262 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:21:02.562415 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
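All of the createFiles, link, and unit operations above are driven by the user-data fetched during the fetch stage; reconstructed as an Ignition v3-style config, the portions visible in this log would look roughly like the sketch below (URLs and paths are copied from the log; the structure, spec version, and everything omitted are assumptions, not the actual user-data):

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true }
        ]
      }
    }

The op(c)/op(d) lines above correspond to the unit's contents being written out, and op(e) to the "enabled": true preset.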
Jan 29 16:21:02.574686 initrd-setup-root-after-ignition[965]: grep: Jan 29 16:21:02.574686 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:21:02.576785 initrd-setup-root-after-ignition[965]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:21:02.576785 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:21:02.577746 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:21:02.579137 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:21:02.587687 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:21:02.628029 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:21:02.628246 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:21:02.629707 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:21:02.630772 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:21:02.631469 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:21:02.632997 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:21:02.666366 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:21:02.677751 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:21:02.692011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:21:02.692992 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:21:02.694019 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:21:02.695044 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:21:02.695430 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:21:02.696827 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:21:02.697554 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:21:02.698412 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:21:02.699152 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:21:02.699970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:21:02.700807 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:21:02.701637 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:21:02.702500 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:21:02.703278 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:21:02.704006 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:21:02.704696 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:21:02.704968 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:21:02.706309 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:21:02.707032 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:21:02.707769 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 29 16:21:02.708631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:21:02.709750 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:21:02.709965 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:21:02.711300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:21:02.711551 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:21:02.712566 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:21:02.712790 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:21:02.713757 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 16:21:02.713925 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:21:02.723692 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:21:02.724403 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:21:02.724700 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:21:02.728582 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:21:02.729130 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:21:02.729426 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:21:02.730091 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:21:02.731792 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:21:02.743679 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:21:02.744716 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:21:02.753287 ignition[989]: INFO : Ignition 2.20.0 Jan 29 16:21:02.753287 ignition[989]: INFO : Stage: umount Jan 29 16:21:02.753287 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:21:02.753287 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 16:21:02.757131 ignition[989]: INFO : umount: umount passed Jan 29 16:21:02.757131 ignition[989]: INFO : Ignition finished successfully Jan 29 16:21:02.756494 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:21:02.756690 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:21:02.759881 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:21:02.760169 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:21:02.761388 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:21:02.761494 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:21:02.762285 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:21:02.762371 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:21:02.762967 systemd[1]: Stopped target network.target - Network. Jan 29 16:21:02.764776 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:21:02.764930 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:21:02.768301 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:21:02.768914 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 16:21:02.772400 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:21:02.773527 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:21:02.775670 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:21:02.776444 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:21:02.776531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:21:02.777186 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:21:02.777267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:21:02.779688 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:21:02.779818 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:21:02.782797 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:21:02.782917 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:21:02.793782 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:21:02.818437 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:21:02.822070 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:21:02.833809 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:21:02.833940 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:21:02.847841 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:21:02.848702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:21:02.848822 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:21:02.853837 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:21:02.854112 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:21:02.854215 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:21:02.855962 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:21:02.856739 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:21:02.856821 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:21:02.864419 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:21:02.864915 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:21:02.865008 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:21:02.865907 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:21:02.865959 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:21:02.867794 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:21:02.867843 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:21:02.869606 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:21:02.873974 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:21:02.874971 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:21:02.875092 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:21:02.887746 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 29 16:21:02.887985 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:21:02.889382 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:21:02.889537 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:21:02.892407 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:21:02.892519 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:21:02.893693 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:21:02.893758 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:21:02.894502 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:21:02.894594 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:21:02.895819 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:21:02.895903 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:21:02.897008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:21:02.897093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:21:02.898326 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:21:02.898418 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:21:02.904543 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:21:02.905635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:21:02.905735 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:21:02.907277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:21:02.907351 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:21:02.930895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:21:02.931087 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:21:02.933108 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:21:02.941613 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:21:02.956389 systemd[1]: Switching root. Jan 29 16:21:02.991552 systemd-journald[184]: Journal stopped Jan 29 16:21:04.480651 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 16:21:04.480791 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:21:04.480821 kernel: SELinux: policy capability open_perms=1 Jan 29 16:21:04.480834 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:21:04.480852 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:21:04.480869 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:21:04.480895 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:21:04.480915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:21:04.480956 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:21:04.480976 kernel: audit: type=1403 audit(1738167663.132:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:21:04.480993 systemd[1]: Successfully loaded SELinux policy in 45.333ms. Jan 29 16:21:04.481022 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.333ms. 
Jan 29 16:21:04.481050 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:21:04.481071 systemd[1]: Detected virtualization kvm. Jan 29 16:21:04.481090 systemd[1]: Detected architecture x86-64. Jan 29 16:21:04.481109 systemd[1]: Detected first boot. Jan 29 16:21:04.481142 systemd[1]: Hostname set to <ci-4230.0.0-8-df8e9582f3>. Jan 29 16:21:04.481160 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:21:04.481177 zram_generator::config[1034]: No configuration found. Jan 29 16:21:04.481197 kernel: Guest personality initialized and is inactive Jan 29 16:21:04.481214 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:21:04.481249 kernel: Initialized host personality Jan 29 16:21:04.481265 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:21:04.481281 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:21:04.481301 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:21:04.481390 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:21:04.481409 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:21:04.481432 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:21:04.481451 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:21:04.481469 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:21:04.481488 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:21:04.481508 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:21:04.481527 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:21:04.481558 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:21:04.481577 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:21:04.481595 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:21:04.481614 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:21:04.481634 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:21:04.481653 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:21:04.481672 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:21:04.481703 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:21:04.481725 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:21:04.481746 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:21:04.481764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:21:04.490471 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:21:04.490510 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
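The zram_generator entry above means no /etc/systemd/zram-generator.conf (or drop-in) exists on this host, so no compressed-RAM devices are generated. Had one been wanted, the generator reads a config of this shape (illustrative only; absent here, hence "No configuration found"):

    # /etc/systemd/zram-generator.conf (sketch, not present on this host)
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd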
Jan 29 16:21:04.490586 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:21:04.490601 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:21:04.490629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:21:04.490643 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:21:04.490662 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:21:04.490681 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:21:04.490700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:21:04.490717 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:21:04.490735 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:21:04.490756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:21:04.490776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:21:04.490799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:21:04.490822 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:21:04.490836 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:21:04.490849 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:21:04.490862 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:21:04.490875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:04.490892 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:21:04.490905 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:21:04.490919 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:21:04.490940 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:21:04.490954 systemd[1]: Reached target machines.target - Containers. Jan 29 16:21:04.490966 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:21:04.490984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:04.490997 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:21:04.491011 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:21:04.491023 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:21:04.491035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:21:04.491055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:21:04.491069 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:21:04.491082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:21:04.491095 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:21:04.491108 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 29 16:21:04.491120 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:21:04.491133 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:21:04.491146 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:21:04.491159 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:04.491179 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:21:04.491193 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:21:04.491206 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:21:04.491257 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:21:04.491273 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:21:04.491285 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:21:04.491306 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:21:04.491323 systemd[1]: Stopped verity-setup.service. Jan 29 16:21:04.491337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:04.491423 systemd-journald[1108]: Collecting audit messages is disabled. Jan 29 16:21:04.491473 systemd-journald[1108]: Journal started Jan 29 16:21:04.491513 systemd-journald[1108]: Runtime Journal (/run/log/journal/99bb6b28773d45e687e4ce64f2da7027) is 4.9M, max 39.3M, 34.4M free. Jan 29 16:21:04.115024 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:21:04.125669 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:21:04.126443 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:21:04.533275 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:21:04.538687 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:21:04.540443 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:21:04.542520 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:21:04.542985 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:21:04.543934 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:21:04.546526 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:21:04.547408 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:21:04.548350 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:21:04.548612 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:21:04.559299 kernel: ACPI: bus type drm_connector registered Jan 29 16:21:04.551967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:21:04.552298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:21:04.554661 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:21:04.554858 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:21:04.558606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 29 16:21:04.558869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:21:04.562308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:21:04.563327 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:21:04.564382 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:21:04.586709 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:21:04.614360 kernel: loop: module loaded Jan 29 16:21:04.613437 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:21:04.613811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:21:04.621518 kernel: fuse: init (API version 7.39) Jan 29 16:21:04.620901 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:21:04.624478 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:21:04.636632 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:21:04.637469 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:21:04.637688 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:21:04.640051 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:21:04.649984 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:21:04.658442 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:21:04.659792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:04.663483 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:21:04.673672 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:21:04.674352 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:21:04.681630 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:21:04.682551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:21:04.685618 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:21:04.695884 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:21:04.717948 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:21:04.722996 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:21:04.725547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:21:04.729979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:21:04.731004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:21:04.732320 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:21:04.764421 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:21:04.784070 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 29 16:21:04.789266 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:21:04.811345 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:21:04.814850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:21:04.826425 systemd-journald[1108]: Time spent on flushing to /var/log/journal/99bb6b28773d45e687e4ce64f2da7027 is 141.302ms for 1008 entries. Jan 29 16:21:04.826425 systemd-journald[1108]: System Journal (/var/log/journal/99bb6b28773d45e687e4ce64f2da7027) is 8M, max 195.6M, 187.6M free. Jan 29 16:21:05.011276 systemd-journald[1108]: Received client request to flush runtime journal. Jan 29 16:21:05.011464 kernel: loop0: detected capacity change from 0 to 138176 Jan 29 16:21:05.011503 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:21:05.011532 kernel: loop1: detected capacity change from 0 to 205544 Jan 29 16:21:04.829819 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:21:04.866912 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:21:04.904429 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:21:04.937638 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:21:04.964942 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:21:04.976832 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:21:05.019360 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:21:05.058281 kernel: loop2: detected capacity change from 0 to 8 Jan 29 16:21:05.083479 kernel: loop3: detected capacity change from 0 to 147912 Jan 29 16:21:05.116479 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 29 16:21:05.117765 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 29 16:21:05.129564 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:21:05.141902 kernel: loop4: detected capacity change from 0 to 138176 Jan 29 16:21:05.141282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:21:05.184294 kernel: loop5: detected capacity change from 0 to 205544 Jan 29 16:21:05.246597 kernel: loop6: detected capacity change from 0 to 8 Jan 29 16:21:05.254526 kernel: loop7: detected capacity change from 0 to 147912 Jan 29 16:21:05.310503 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 29 16:21:05.311445 (sd-merge)[1185]: Merged extensions into '/usr'. Jan 29 16:21:05.335599 systemd[1]: Reload requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:21:05.335649 systemd[1]: Reloading... Jan 29 16:21:05.554326 zram_generator::config[1215]: No configuration found. Jan 29 16:21:05.729280 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:21:05.794019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:05.919102 systemd[1]: Reloading finished in 582 ms. 
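The (sd-merge) lines above show systemd-sysext overlaying the four extension images onto /usr, which is what triggers the daemon reload and the ldconfig cache rebuild. A merge is only accepted when the image carries an extension-release marker matching the host's os-release; a sketch of the marker expected inside kubernetes.raw (values assumed, following the systemd-sysext matching rules):

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the image; illustrative)
    ID=flatcar
    SYSEXT_LEVEL=1.0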
Jan 29 16:21:05.935336 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:21:05.936179 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:21:05.959637 systemd[1]: Starting ensure-sysext.service... Jan 29 16:21:05.973635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:21:06.004033 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:21:06.004247 systemd[1]: Reloading... Jan 29 16:21:06.039428 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:21:06.041142 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:21:06.042194 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:21:06.043741 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 29 16:21:06.043944 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 29 16:21:06.047860 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:21:06.048061 systemd-tmpfiles[1258]: Skipping /boot Jan 29 16:21:06.071478 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:21:06.072422 systemd-tmpfiles[1258]: Skipping /boot Jan 29 16:21:06.155300 zram_generator::config[1287]: No configuration found. Jan 29 16:21:06.382643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:06.480170 systemd[1]: Reloading finished in 475 ms. Jan 29 16:21:06.495605 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:21:06.509217 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:21:06.522722 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:21:06.529201 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:21:06.531705 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:21:06.536350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:21:06.540594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:21:06.545582 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:21:06.551779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.551993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:06.558337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:21:06.564796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:21:06.568603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:21:06.569880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
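The docker.socket complaint repeated across both reloads above is systemd rewriting a legacy /var/run path at runtime; the permanent fix it asks for is a one-line change in the unit's [Socket] stanza:

    # /usr/lib/systemd/system/docker.socket, line 6 (relevant stanza only)
    [Socket]
    # was: ListenStream=/var/run/docker.sock
    ListenStream=/run/docker.sock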
Jan 29 16:21:06.570390 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:06.570495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.578808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.579012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:06.579204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:06.580467 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:06.580632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.585161 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.587357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:06.596687 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:21:06.597445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:06.597595 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:06.597739 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.611835 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:21:06.613524 systemd[1]: Finished ensure-sysext.service. Jan 29 16:21:06.643623 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:21:06.646368 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:21:06.647705 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:21:06.647916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:21:06.657722 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Jan 29 16:21:06.658960 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:21:06.659951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:21:06.663863 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:21:06.667825 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 29 16:21:06.678043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:21:06.678310 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:21:06.680759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:21:06.681037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:21:06.683103 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:21:06.683276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:21:06.702587 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:21:06.712327 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:21:06.720082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:21:06.730595 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:21:06.735738 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:21:06.757504 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:21:06.762778 augenrules[1386]: No rules Jan 29 16:21:06.767024 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:21:06.768419 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:21:06.850824 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jan 29 16:21:06.859431 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 29 16:21:06.859910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.860084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:06.863587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:21:06.871511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:21:06.880754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:21:06.881816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:06.887758 kernel: ISO 9660 Extensions: RRIP_1991A Jan 29 16:21:06.881862 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:06.881896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:21:06.881917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:21:06.891950 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 29 16:21:06.896186 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:21:06.918129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 29 16:21:06.919332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:21:06.920958 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:21:06.923758 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:21:06.925066 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:21:06.937473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:21:06.937696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:21:06.938459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:21:06.997448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1373) Jan 29 16:21:07.058629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:21:07.063778 systemd-networkd[1370]: lo: Link UP Jan 29 16:21:07.064095 systemd-networkd[1370]: lo: Gained carrier Jan 29 16:21:07.064548 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:21:07.067989 systemd-networkd[1370]: Enumeration completed Jan 29 16:21:07.068111 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:21:07.075018 systemd-resolved[1335]: Positive Trust Anchors: Jan 29 16:21:07.075041 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:21:07.075081 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:21:07.077743 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:21:07.080779 systemd-resolved[1335]: Using system hostname 'ci-4230.0.0-8-df8e9582f3'. Jan 29 16:21:07.089489 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:21:07.090163 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:21:07.090711 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:21:07.091699 systemd[1]: Reached target network.target - Network. Jan 29 16:21:07.092288 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:21:07.093034 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:21:07.103545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 16:21:07.108529 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:21:07.123616 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:21:07.138758 systemd-networkd[1370]: eth0: Configuring with /run/systemd/network/10-72:d8:01:dd:3f:d5.network. 
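The /run/systemd/network/10-72:d8:01:dd:3f:d5.network unit configuring eth0 here was generated at runtime (by the parse-ip-for-networkd/afterburn machinery torn down during the initrd exit above) and keys on the NIC's MAC address instead of the unstable "eth0" name. Reconstructed, not the actual generated file, it would contain something like:

    # /run/systemd/network/10-72:d8:01:dd:3f:d5.network (sketch)
    [Match]
    MACAddress=72:d8:01:dd:3f:d5

    [Network]
    DHCP=yes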
Jan 29 16:21:07.142282 systemd-networkd[1370]: eth0: Link UP Jan 29 16:21:07.142295 systemd-networkd[1370]: eth0: Gained carrier Jan 29 16:21:07.152108 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:21:07.153256 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Jan 29 16:21:07.161594 systemd-networkd[1370]: eth1: Configuring with /run/systemd/network/10-d2:72:74:74:42:23.network. Jan 29 16:21:07.164388 systemd-networkd[1370]: eth1: Link UP Jan 29 16:21:07.164399 systemd-networkd[1370]: eth1: Gained carrier Jan 29 16:21:07.186249 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 16:21:07.615729 systemd-timesyncd[1350]: Contacted time server 96.60.160.227:123 (0.flatcar.pool.ntp.org). Jan 29 16:21:07.615822 systemd-timesyncd[1350]: Initial clock synchronization to Wed 2025-01-29 16:21:07.615372 UTC. Jan 29 16:21:07.616115 systemd-resolved[1335]: Clock change detected. Flushing caches. Jan 29 16:21:07.619087 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 16:21:07.666479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:21:07.680108 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:21:07.720391 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 16:21:07.720523 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 16:21:07.721312 kernel: Console: switching to colour dummy device 80x25 Jan 29 16:21:07.722131 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 16:21:07.722162 kernel: [drm] features: -context_init Jan 29 16:21:07.724291 kernel: [drm] number of scanouts: 1 Jan 29 16:21:07.724358 kernel: [drm] number of cap sets: 0 Jan 29 16:21:07.727202 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 16:21:07.745109 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 16:21:07.749139 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 16:21:07.771091 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 16:21:07.774553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:21:07.774942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:21:07.828587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:21:07.900087 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:21:07.902961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:21:07.938497 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:21:07.946477 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:21:07.963859 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:21:07.996905 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:21:07.998606 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:21:07.998742 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:21:07.998929 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
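The timesyncd entries above show the first NTP exchange with 0.flatcar.pool.ntp.org stepping the clock, which is why systemd-resolved logs "Clock change detected" and flushes its caches. The server list here presumably comes from a build-time default; overriding it would use the standard timesyncd.conf format (illustrative, no such override exists on this host):

    # /etc/systemd/timesyncd.conf (sketch)
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org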
Jan 29 16:21:07.999055 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:21:07.999373 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:21:07.999566 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:21:07.999669 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:21:07.999741 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:21:07.999768 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:21:07.999828 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:21:08.002518 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:21:08.004935 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:21:08.010192 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:21:08.011544 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:21:08.011881 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:21:08.025376 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:21:08.027040 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:21:08.041445 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:21:08.042828 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:21:08.047267 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:21:08.047833 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:21:08.048548 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:21:08.049135 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:21:08.049169 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:21:08.057321 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:21:08.063304 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:21:08.068721 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:21:08.081299 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:21:08.085357 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:21:08.086222 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:21:08.099388 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:21:08.106214 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:21:08.113348 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:21:08.117602 dbus-daemon[1457]: [system] SELinux support is enabled Jan 29 16:21:08.121543 jq[1458]: false Jan 29 16:21:08.122512 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
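Note: dbus.socket, docker.socket, and the sshd sockets listed above are socket-activated units: systemd owns the listener and only starts the backing daemon on the first connection. The listener-to-service mapping is queryable at runtime:

  systemctl list-sockets            # LISTEN / UNIT / ACTIVATES table
  systemctl status docker.socket    # one activation socket in detail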
Jan 29 16:21:08.142354 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:21:08.146758 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:21:08.147658 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:21:08.156746 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:21:08.160106 extend-filesystems[1459]: Found loop4 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found loop5 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found loop6 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found loop7 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found vda Jan 29 16:21:08.160106 extend-filesystems[1459]: Found vda1 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found vda2 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found vda3 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found usr Jan 29 16:21:08.160106 extend-filesystems[1459]: Found vda4 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found vda6 Jan 29 16:21:08.160106 extend-filesystems[1459]: Found vda7 Jan 29 16:21:08.209210 extend-filesystems[1459]: Found vda9 Jan 29 16:21:08.209210 extend-filesystems[1459]: Checking size of /dev/vda9 Jan 29 16:21:08.210050 coreos-metadata[1456]: Jan 29 16:21:08.201 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 16:21:08.171408 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:21:08.181306 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:21:08.223612 coreos-metadata[1456]: Jan 29 16:21:08.223 INFO Fetch successful Jan 29 16:21:08.202994 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:21:08.218982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:21:08.219374 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:21:08.221678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:21:08.222507 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:21:08.231101 jq[1471]: true Jan 29 16:21:08.262783 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:21:08.262885 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:21:08.264689 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:21:08.264785 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 29 16:21:08.264809 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
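Note: the coreos-metadata fetch above hits DigitalOcean's link-local metadata endpoint, which any process on the droplet can query directly; jq is assumed installed and is used only for readability:

  curl -s http://169.254.169.254/metadata/v1.json | jq .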
Jan 29 16:21:08.279930 extend-filesystems[1459]: Resized partition /dev/vda9 Jan 29 16:21:08.296490 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 29 16:21:08.296592 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:21:08.298698 jq[1481]: true Jan 29 16:21:08.314676 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:21:08.329736 update_engine[1468]: I20250129 16:21:08.328635 1468 main.cc:92] Flatcar Update Engine starting Jan 29 16:21:08.332533 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:21:08.332800 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:21:08.346973 tar[1479]: linux-amd64/helm Jan 29 16:21:08.359729 update_engine[1468]: I20250129 16:21:08.359639 1468 update_check_scheduler.cc:74] Next update check in 3m14s Jan 29 16:21:08.369533 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:21:08.402256 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1402) Jan 29 16:21:08.388867 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:21:08.395858 systemd-logind[1467]: New seat seat0. Jan 29 16:21:08.400563 systemd-logind[1467]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:21:08.400585 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:21:08.400922 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:21:08.408060 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:21:08.433896 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:21:08.553611 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:21:08.563104 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 16:21:08.589183 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:21:08.589183 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 16:21:08.589183 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 16:21:08.588813 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:21:08.595622 bash[1519]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:21:08.595765 extend-filesystems[1459]: Resized filesystem in /dev/vda9 Jan 29 16:21:08.595765 extend-filesystems[1459]: Found vdb Jan 29 16:21:08.592894 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:21:08.598444 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:21:08.613574 systemd[1]: Starting sshkeys.service... Jan 29 16:21:08.665234 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:21:08.678606 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
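Note: the extend-filesystems sequence above grows /dev/vda9 on-line from 553472 to 15121403 4k blocks while it is mounted on /. Done by hand it is two steps; growpart (from cloud-utils) is an assumption about available tooling, while resize2fs is the same tool the log shows (1.47.1):

  growpart /dev/vda 9    # enlarge the partition to fill the disk
  resize2fs /dev/vda9    # grow ext4 in place while mounted on /
  df -h /                # confirm the new capacity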
Jan 29 16:21:08.784877 coreos-metadata[1532]: Jan 29 16:21:08.784 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 16:21:08.799107 coreos-metadata[1532]: Jan 29 16:21:08.796 INFO Fetch successful Jan 29 16:21:08.815169 unknown[1532]: wrote ssh authorized keys file for user: core Jan 29 16:21:08.844186 systemd-networkd[1370]: eth0: Gained IPv6LL Jan 29 16:21:08.851654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:21:08.855918 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:21:08.872436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:08.888542 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:21:08.889020 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:21:08.894849 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:21:08.902513 systemd[1]: Finished sshkeys.service. Jan 29 16:21:08.913440 containerd[1485]: time="2025-01-29T16:21:08.913313778Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:21:08.948610 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:21:08.990165 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:21:09.024091 containerd[1485]: time="2025-01-29T16:21:09.021784259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:09.030501 containerd[1485]: time="2025-01-29T16:21:09.030404884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:09.031184 containerd[1485]: time="2025-01-29T16:21:09.031153692Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:21:09.031331 containerd[1485]: time="2025-01-29T16:21:09.031290312Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:21:09.031582 containerd[1485]: time="2025-01-29T16:21:09.031562859Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:21:09.032103 containerd[1485]: time="2025-01-29T16:21:09.032039202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:09.032295 containerd[1485]: time="2025-01-29T16:21:09.032274923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:09.032638 containerd[1485]: time="2025-01-29T16:21:09.032619957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033165906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033189003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033205049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033215079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033321392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033542382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033701319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033715346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033796993Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:21:09.033982 containerd[1485]: time="2025-01-29T16:21:09.033843482Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:21:09.043094 containerd[1485]: time="2025-01-29T16:21:09.040749369Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:21:09.043094 containerd[1485]: time="2025-01-29T16:21:09.041775360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:21:09.043094 containerd[1485]: time="2025-01-29T16:21:09.041827533Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:21:09.043094 containerd[1485]: time="2025-01-29T16:21:09.041866822Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:21:09.043094 containerd[1485]: time="2025-01-29T16:21:09.041888164Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:21:09.043355 containerd[1485]: time="2025-01-29T16:21:09.043172630Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:21:09.043567 containerd[1485]: time="2025-01-29T16:21:09.043429185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:21:09.043614 containerd[1485]: time="2025-01-29T16:21:09.043574813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 29 16:21:09.043614 containerd[1485]: time="2025-01-29T16:21:09.043592239Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:21:09.043614 containerd[1485]: time="2025-01-29T16:21:09.043607552Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:21:09.043682 containerd[1485]: time="2025-01-29T16:21:09.043621333Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043682 containerd[1485]: time="2025-01-29T16:21:09.043634384Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043682 containerd[1485]: time="2025-01-29T16:21:09.043646195Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043682 containerd[1485]: time="2025-01-29T16:21:09.043659606Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043682 containerd[1485]: time="2025-01-29T16:21:09.043675549Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043780 containerd[1485]: time="2025-01-29T16:21:09.043688926Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043780 containerd[1485]: time="2025-01-29T16:21:09.043701036Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043780 containerd[1485]: time="2025-01-29T16:21:09.043711733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:21:09.043780 containerd[1485]: time="2025-01-29T16:21:09.043731067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043780 containerd[1485]: time="2025-01-29T16:21:09.043744622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043780 containerd[1485]: time="2025-01-29T16:21:09.043758614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043780 containerd[1485]: time="2025-01-29T16:21:09.043770667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043781747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043793799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043804480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043816440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043827965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043842551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043853588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043864010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043898290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.043924 containerd[1485]: time="2025-01-29T16:21:09.043918398Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.043940085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.043952177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.043961960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044003433Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044020311Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044030913Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044042260Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044055630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044097775Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044108706Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:21:09.044153 containerd[1485]: time="2025-01-29T16:21:09.044117994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:21:09.045314 containerd[1485]: time="2025-01-29T16:21:09.044440570Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:21:09.045552 containerd[1485]: time="2025-01-29T16:21:09.045491058Z" level=info msg="Connect containerd service" Jan 29 16:21:09.045581 containerd[1485]: time="2025-01-29T16:21:09.045551581Z" level=info msg="using legacy CRI server" Jan 29 16:21:09.045581 containerd[1485]: time="2025-01-29T16:21:09.045562411Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:21:09.046475 containerd[1485]: time="2025-01-29T16:21:09.045740299Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:21:09.046584 containerd[1485]: time="2025-01-29T16:21:09.046554966Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:21:09.047073 
containerd[1485]: time="2025-01-29T16:21:09.046692984Z" level=info msg="Start subscribing containerd event" Jan 29 16:21:09.047073 containerd[1485]: time="2025-01-29T16:21:09.046763271Z" level=info msg="Start recovering state" Jan 29 16:21:09.047073 containerd[1485]: time="2025-01-29T16:21:09.046850527Z" level=info msg="Start event monitor" Jan 29 16:21:09.047073 containerd[1485]: time="2025-01-29T16:21:09.046868489Z" level=info msg="Start snapshots syncer" Jan 29 16:21:09.047073 containerd[1485]: time="2025-01-29T16:21:09.046907304Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:21:09.047073 containerd[1485]: time="2025-01-29T16:21:09.046918086Z" level=info msg="Start streaming server" Jan 29 16:21:09.049518 containerd[1485]: time="2025-01-29T16:21:09.048296024Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:21:09.049518 containerd[1485]: time="2025-01-29T16:21:09.048367264Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:21:09.048559 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:21:09.053439 containerd[1485]: time="2025-01-29T16:21:09.052632896Z" level=info msg="containerd successfully booted in 0.141570s" Jan 29 16:21:09.228402 systemd-networkd[1370]: eth1: Gained IPv6LL Jan 29 16:21:09.259913 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:21:09.328392 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:21:09.346275 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:21:09.355888 systemd[1]: Started sshd@0-164.92.66.114:22-139.178.89.65:37634.service - OpenSSH per-connection server daemon (139.178.89.65:37634). Jan 29 16:21:09.400658 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:21:09.402884 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:21:09.420292 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:21:09.473811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:21:09.486771 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:21:09.506774 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:21:09.512234 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:21:09.542584 sshd[1564]: Accepted publickey for core from 139.178.89.65 port 37634 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:21:09.546980 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:09.562508 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:21:09.577615 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:21:09.601657 systemd-logind[1467]: New session 1 of user core. Jan 29 16:21:09.636219 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:21:09.656716 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:21:09.682693 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:21:09.694007 systemd-logind[1467]: New session c1 of user core. Jan 29 16:21:09.942810 tar[1479]: linux-amd64/LICENSE Jan 29 16:21:09.942810 tar[1479]: linux-amd64/README.md Jan 29 16:21:10.004279 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
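Note: the CRI configuration dump above shows the runc runtime with SystemdCgroup:true, the registry.k8s.io/pause:3.8 sandbox image, and CNI config expected under /etc/cni/net.d (the "failed to load cni during init" error is expected at this point and clears once a network plugin installs a config there). Expressed as a containerd config.toml, that corresponds roughly to the fragment below; this is a sketch of the relevant keys, not the node's actual file:

  cat <<'EOF' >/etc/containerd/config.toml
  version = 2
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd]
    default_runtime_name = "runc"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
  EOF
  systemctl restart containerd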
Jan 29 16:21:10.019578 systemd[1576]: Queued start job for default target default.target. Jan 29 16:21:10.025141 systemd[1576]: Created slice app.slice - User Application Slice. Jan 29 16:21:10.025197 systemd[1576]: Reached target paths.target - Paths. Jan 29 16:21:10.025267 systemd[1576]: Reached target timers.target - Timers. Jan 29 16:21:10.029362 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:21:10.050828 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:21:10.052249 systemd[1576]: Reached target sockets.target - Sockets. Jan 29 16:21:10.052360 systemd[1576]: Reached target basic.target - Basic System. Jan 29 16:21:10.052427 systemd[1576]: Reached target default.target - Main User Target. Jan 29 16:21:10.052478 systemd[1576]: Startup finished in 339ms. Jan 29 16:21:10.053966 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:21:10.064482 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:21:10.155883 systemd[1]: Started sshd@1-164.92.66.114:22-139.178.89.65:37644.service - OpenSSH per-connection server daemon (139.178.89.65:37644). Jan 29 16:21:10.216303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:10.225798 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:10.227002 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:21:10.229767 systemd[1]: Startup finished in 1.423s (kernel) + 7.391s (initrd) + 6.722s (userspace) = 15.537s. Jan 29 16:21:10.245159 sshd[1590]: Accepted publickey for core from 139.178.89.65 port 37644 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:21:10.248873 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:10.279185 systemd-logind[1467]: New session 2 of user core. Jan 29 16:21:10.286571 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:21:10.358898 sshd[1602]: Connection closed by 139.178.89.65 port 37644 Jan 29 16:21:10.360220 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:10.375834 systemd[1]: sshd@1-164.92.66.114:22-139.178.89.65:37644.service: Deactivated successfully. Jan 29 16:21:10.379896 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:21:10.382935 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:21:10.393891 systemd[1]: Started sshd@2-164.92.66.114:22-139.178.89.65:37660.service - OpenSSH per-connection server daemon (139.178.89.65:37660). Jan 29 16:21:10.399651 systemd-logind[1467]: Removed session 2. Jan 29 16:21:10.459864 sshd[1611]: Accepted publickey for core from 139.178.89.65 port 37660 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:21:10.462027 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:10.469669 systemd-logind[1467]: New session 3 of user core. Jan 29 16:21:10.475396 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:21:10.536562 sshd[1614]: Connection closed by 139.178.89.65 port 37660 Jan 29 16:21:10.540103 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:10.557693 systemd[1]: sshd@2-164.92.66.114:22-139.178.89.65:37660.service: Deactivated successfully. 
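Note: the "Startup finished in 1.423s (kernel) + 7.391s (initrd) + 6.722s (userspace) = 15.537s" line above is the same breakdown systemd-analyze reports on demand after boot:

  systemd-analyze time             # overall boot time split by phase
  systemd-analyze blame | head     # units ordered by initialization time
  systemd-analyze critical-chain   # the dependency path that gated boot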
Jan 29 16:21:10.560402 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:21:10.562640 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:21:10.571588 systemd[1]: Started sshd@3-164.92.66.114:22-139.178.89.65:37672.service - OpenSSH per-connection server daemon (139.178.89.65:37672). Jan 29 16:21:10.575058 systemd-logind[1467]: Removed session 3. Jan 29 16:21:10.637956 sshd[1619]: Accepted publickey for core from 139.178.89.65 port 37672 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:21:10.641280 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:10.651173 systemd-logind[1467]: New session 4 of user core. Jan 29 16:21:10.654870 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:21:10.729879 sshd[1622]: Connection closed by 139.178.89.65 port 37672 Jan 29 16:21:10.730481 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:10.745507 systemd[1]: sshd@3-164.92.66.114:22-139.178.89.65:37672.service: Deactivated successfully. Jan 29 16:21:10.750687 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:21:10.754462 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:21:10.763803 systemd[1]: Started sshd@4-164.92.66.114:22-139.178.89.65:37682.service - OpenSSH per-connection server daemon (139.178.89.65:37682). Jan 29 16:21:10.768291 systemd-logind[1467]: Removed session 4. Jan 29 16:21:10.830726 sshd[1627]: Accepted publickey for core from 139.178.89.65 port 37682 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:21:10.834283 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:10.844850 systemd-logind[1467]: New session 5 of user core. Jan 29 16:21:10.847441 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:21:10.930741 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:21:10.931653 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:21:10.933164 kubelet[1597]: E0129 16:21:10.933046 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:10.938897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:10.940648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:10.941572 systemd[1]: kubelet.service: Consumed 1.081s CPU time, 235.1M memory peak. Jan 29 16:21:10.948252 sudo[1633]: pam_unix(sudo:session): session closed for user root Jan 29 16:21:10.952320 sshd[1630]: Connection closed by 139.178.89.65 port 37682 Jan 29 16:21:10.953271 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:10.973134 systemd[1]: sshd@4-164.92.66.114:22-139.178.89.65:37682.service: Deactivated successfully. Jan 29 16:21:10.977364 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:21:10.980306 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:21:10.995635 systemd[1]: Started sshd@5-164.92.66.114:22-139.178.89.65:37694.service - OpenSSH per-connection server daemon (139.178.89.65:37694). 
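Note: the kubelet failure above is expected on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join. A minimal hand-rolled stand-in of the expected shape follows; the values are illustrative assumptions, not this node's real configuration (containerRuntimeEndpoint is valid in KubeletConfiguration from Kubernetes 1.27 on, and this node runs a v1.31 kubelet):

  mkdir -p /var/lib/kubelet
  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  EOF
  systemctl restart kubelet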
Jan 29 16:21:10.997561 systemd-logind[1467]: Removed session 5. Jan 29 16:21:11.050289 sshd[1639]: Accepted publickey for core from 139.178.89.65 port 37694 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:21:11.052403 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:11.059396 systemd-logind[1467]: New session 6 of user core. Jan 29 16:21:11.070427 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:21:11.133212 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:21:11.133578 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:21:11.138341 sudo[1644]: pam_unix(sudo:session): session closed for user root Jan 29 16:21:11.146684 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:21:11.147057 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:21:11.174655 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:21:11.222869 augenrules[1666]: No rules Jan 29 16:21:11.224829 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:21:11.225198 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:21:11.227039 sudo[1643]: pam_unix(sudo:session): session closed for user root Jan 29 16:21:11.231225 sshd[1642]: Connection closed by 139.178.89.65 port 37694 Jan 29 16:21:11.232494 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:11.243724 systemd[1]: sshd@5-164.92.66.114:22-139.178.89.65:37694.service: Deactivated successfully. Jan 29 16:21:11.246840 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:21:11.249293 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:21:11.254726 systemd[1]: Started sshd@6-164.92.66.114:22-139.178.89.65:52788.service - OpenSSH per-connection server daemon (139.178.89.65:52788). Jan 29 16:21:11.256808 systemd-logind[1467]: Removed session 6. Jan 29 16:21:11.324400 sshd[1674]: Accepted publickey for core from 139.178.89.65 port 52788 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:21:11.326336 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:11.335084 systemd-logind[1467]: New session 7 of user core. Jan 29 16:21:11.345452 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:21:11.406741 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:21:11.407605 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:21:11.942526 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:21:11.942815 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:21:12.922125 dockerd[1695]: time="2025-01-29T16:21:12.921995835Z" level=info msg="Starting up" Jan 29 16:21:13.068923 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1540477507-merged.mount: Deactivated successfully. Jan 29 16:21:13.182173 dockerd[1695]: time="2025-01-29T16:21:13.181641957Z" level=info msg="Loading containers: start." 
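Note: augenrules reports "No rules" above because the two rule files were removed by the preceding sudo command. Restoring auditing is a matter of dropping a file back into /etc/audit/rules.d/ and reloading; the watch rule below is an example only, not the content of the deleted files:

  echo '-w /etc/kubernetes/ -p wa -k k8s-config' >/etc/audit/rules.d/90-k8s.rules
  augenrules --load    # merge rules.d/ into /etc/audit/audit.rules and apply
  auditctl -l          # verify the rule set now active in the kernel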
Jan 29 16:21:13.401121 kernel: Initializing XFRM netlink socket Jan 29 16:21:13.518381 systemd-networkd[1370]: docker0: Link UP Jan 29 16:21:13.556676 dockerd[1695]: time="2025-01-29T16:21:13.556494698Z" level=info msg="Loading containers: done." Jan 29 16:21:13.576748 dockerd[1695]: time="2025-01-29T16:21:13.576664523Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:21:13.577014 dockerd[1695]: time="2025-01-29T16:21:13.576847449Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:21:13.577144 dockerd[1695]: time="2025-01-29T16:21:13.577042843Z" level=info msg="Daemon has completed initialization" Jan 29 16:21:13.625305 dockerd[1695]: time="2025-01-29T16:21:13.625208954Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:21:13.625966 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:21:14.694182 containerd[1485]: time="2025-01-29T16:21:14.694044110Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 16:21:15.514981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527700212.mount: Deactivated successfully. Jan 29 16:21:17.368453 containerd[1485]: time="2025-01-29T16:21:17.367687950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:17.372450 containerd[1485]: time="2025-01-29T16:21:17.371932045Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 16:21:17.373536 containerd[1485]: time="2025-01-29T16:21:17.373443032Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:17.379818 containerd[1485]: time="2025-01-29T16:21:17.379744342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:17.385586 containerd[1485]: time="2025-01-29T16:21:17.382563117Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.688373007s" Jan 29 16:21:17.385586 containerd[1485]: time="2025-01-29T16:21:17.385156664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 16:21:17.389470 containerd[1485]: time="2025-01-29T16:21:17.389324843Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 16:21:19.535514 containerd[1485]: time="2025-01-29T16:21:19.535436279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:19.537298 containerd[1485]: time="2025-01-29T16:21:19.537168103Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 16:21:19.540122 containerd[1485]: time="2025-01-29T16:21:19.539126146Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:19.544161 containerd[1485]: time="2025-01-29T16:21:19.543809407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:19.545749 containerd[1485]: time="2025-01-29T16:21:19.545470690Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 2.156086713s" Jan 29 16:21:19.545749 containerd[1485]: time="2025-01-29T16:21:19.545545999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 16:21:19.546925 containerd[1485]: time="2025-01-29T16:21:19.546864863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 16:21:21.078749 containerd[1485]: time="2025-01-29T16:21:21.078635013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:21.080710 containerd[1485]: time="2025-01-29T16:21:21.080141952Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 16:21:21.086452 containerd[1485]: time="2025-01-29T16:21:21.086379602Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:21.091988 containerd[1485]: time="2025-01-29T16:21:21.091913010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:21.094594 containerd[1485]: time="2025-01-29T16:21:21.094509018Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.547581254s" Jan 29 16:21:21.094893 containerd[1485]: time="2025-01-29T16:21:21.094860625Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 16:21:21.096325 containerd[1485]: time="2025-01-29T16:21:21.095931916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:21:21.098659 systemd-resolved[1335]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 29 16:21:21.190053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
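Note: the PullImage/Pulled lines above come from containerd's CRI service acting on kubelet-style pull requests. The same pulls can be reproduced by hand against the node's containerd socket, and the resulting images live in containerd's k8s.io namespace:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-controller-manager:v1.31.5
  ctr --namespace k8s.io images ls | grep kube-controller-manager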
Jan 29 16:21:21.210574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:21.483525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:21.492835 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:21.615130 kubelet[1958]: E0129 16:21:21.614524 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:21.623198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:21.623426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:21.624680 systemd[1]: kubelet.service: Consumed 319ms CPU time, 95.9M memory peak. Jan 29 16:21:22.304495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817816533.mount: Deactivated successfully. Jan 29 16:21:23.009043 containerd[1485]: time="2025-01-29T16:21:23.008972710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:23.010165 containerd[1485]: time="2025-01-29T16:21:23.009880566Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 16:21:23.011012 containerd[1485]: time="2025-01-29T16:21:23.010960066Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:23.014906 containerd[1485]: time="2025-01-29T16:21:23.013816272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:23.014906 containerd[1485]: time="2025-01-29T16:21:23.014728847Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.918739602s" Jan 29 16:21:23.014906 containerd[1485]: time="2025-01-29T16:21:23.014773309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:21:23.016371 containerd[1485]: time="2025-01-29T16:21:23.016299329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:21:23.545889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774581604.mount: Deactivated successfully. Jan 29 16:21:24.204413 systemd-resolved[1335]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
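Note: "Using degraded feature set UDP instead of UDP+EDNS0" above means systemd-resolved probed the upstream DNS servers (67.207.67.2 and .3) and fell back to plain UDP without EDNS0. The per-link resolver state is visible with resolvectl:

  resolvectl status                 # servers, DNSSEC mode, current feature level
  resolvectl query registry.k8s.io  # exercise the lookup path the image pulls use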
Jan 29 16:21:24.512502 containerd[1485]: time="2025-01-29T16:21:24.512144852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:24.514042 containerd[1485]: time="2025-01-29T16:21:24.513987321Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 16:21:24.515123 containerd[1485]: time="2025-01-29T16:21:24.514220108Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:24.518970 containerd[1485]: time="2025-01-29T16:21:24.518007292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:24.519720 containerd[1485]: time="2025-01-29T16:21:24.519675326Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.503319963s" Jan 29 16:21:24.519720 containerd[1485]: time="2025-01-29T16:21:24.519717843Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:21:24.520495 containerd[1485]: time="2025-01-29T16:21:24.520461535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:21:24.978193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263260147.mount: Deactivated successfully. 
Jan 29 16:21:24.984193 containerd[1485]: time="2025-01-29T16:21:24.984120480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:24.985713 containerd[1485]: time="2025-01-29T16:21:24.985588359Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 16:21:24.985713 containerd[1485]: time="2025-01-29T16:21:24.985652948Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:24.988569 containerd[1485]: time="2025-01-29T16:21:24.988475336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:24.990297 containerd[1485]: time="2025-01-29T16:21:24.989562160Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 469.059702ms" Jan 29 16:21:24.990297 containerd[1485]: time="2025-01-29T16:21:24.989616020Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 16:21:24.990713 containerd[1485]: time="2025-01-29T16:21:24.990687591Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 16:21:25.526193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466151888.mount: Deactivated successfully. Jan 29 16:21:27.691318 containerd[1485]: time="2025-01-29T16:21:27.691238841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:27.692842 containerd[1485]: time="2025-01-29T16:21:27.692773802Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 16:21:27.694713 containerd[1485]: time="2025-01-29T16:21:27.693714063Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:27.697447 containerd[1485]: time="2025-01-29T16:21:27.697396104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:27.703626 containerd[1485]: time="2025-01-29T16:21:27.703541374Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.712738216s" Jan 29 16:21:27.703894 containerd[1485]: time="2025-01-29T16:21:27.703861924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 16:21:30.519293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:21:30.520110 systemd[1]: kubelet.service: Consumed 319ms CPU time, 95.9M memory peak. Jan 29 16:21:30.529501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:30.568706 systemd[1]: Reload requested from client PID 2102 ('systemctl') (unit session-7.scope)... Jan 29 16:21:30.568921 systemd[1]: Reloading... Jan 29 16:21:30.704124 zram_generator::config[2155]: No configuration found. Jan 29 16:21:30.841093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:30.987358 systemd[1]: Reloading finished in 417 ms. Jan 29 16:21:31.048703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:31.054139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:31.057949 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:21:31.058378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:31.058441 systemd[1]: kubelet.service: Consumed 103ms CPU time, 83.6M memory peak. Jan 29 16:21:31.068660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:31.199321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:31.210681 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:21:31.278940 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:31.278940 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:21:31.278940 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
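Note: the "Reload requested from client PID 2102 ('systemctl')" pair above is the PID 1 side of a daemon-reload issued from the session-7 shell, after which kubelet is stopped and started against the reloaded unit files. The "Consumed ... CPU time, ... memory peak" lines are systemd's per-unit resource accounting, printed at stop and queryable at runtime; MemoryPeak assumes systemd v254 or newer, which the peak-memory log lines suggest this image ships:

  systemctl daemon-reload
  systemctl show kubelet.service -p NRestarts -p CPUUsageNSec -p MemoryPeak
  journalctl -u kubelet -b --no-pager | tail -n 20   # the most recent attempt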
Jan 29 16:21:31.281107 kubelet[2202]: I0129 16:21:31.280971 2202 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:21:31.662818 kubelet[2202]: I0129 16:21:31.662633 2202 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:21:31.665110 kubelet[2202]: I0129 16:21:31.663049 2202 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:21:31.665110 kubelet[2202]: I0129 16:21:31.664047 2202 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:21:31.728580 kubelet[2202]: I0129 16:21:31.728522 2202 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:21:31.729028 kubelet[2202]: E0129 16:21:31.728677 2202 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://164.92.66.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:31.746499 kubelet[2202]: E0129 16:21:31.746433 2202 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:21:31.746499 kubelet[2202]: I0129 16:21:31.746484 2202 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:21:31.752642 kubelet[2202]: I0129 16:21:31.752584 2202 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:21:31.752860 kubelet[2202]: I0129 16:21:31.752801 2202 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:21:31.753185 kubelet[2202]: I0129 16:21:31.753111 2202 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:21:31.753471 kubelet[2202]: I0129 16:21:31.753172 2202 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.0-8-df8e9582f3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:21:31.753637 kubelet[2202]: I0129 16:21:31.753509 2202 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:21:31.753637 kubelet[2202]: I0129 16:21:31.753533 2202 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:21:31.753801 kubelet[2202]: I0129 16:21:31.753770 2202 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:31.757408 kubelet[2202]: I0129 16:21:31.757022 2202 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:21:31.757408 kubelet[2202]: I0129 16:21:31.757133 2202 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:21:31.757408 kubelet[2202]: I0129 16:21:31.757224 2202 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:21:31.757408 kubelet[2202]: I0129 16:21:31.757268 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:21:31.766748 kubelet[2202]: W0129 16:21:31.766647 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.66.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-8-df8e9582f3&limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:31.767618 kubelet[2202]: E0129 16:21:31.767110 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://164.92.66.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-8-df8e9582f3&limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:31.767618 kubelet[2202]: I0129 16:21:31.767350 2202 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:21:31.770783 kubelet[2202]: I0129 16:21:31.770735 2202 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:21:31.773114 kubelet[2202]: W0129 16:21:31.772839 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:21:31.776378 kubelet[2202]: I0129 16:21:31.775627 2202 server.go:1269] "Started kubelet" Jan 29 16:21:31.787465 kubelet[2202]: I0129 16:21:31.787200 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:21:31.799984 kubelet[2202]: E0129 16:21:31.796248 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.66.114:6443/api/v1/namespaces/default/events\": dial tcp 164.92.66.114:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.0-8-df8e9582f3.181f364e7457ea44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-8-df8e9582f3,UID:ci-4230.0.0-8-df8e9582f3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-8-df8e9582f3,},FirstTimestamp:2025-01-29 16:21:31.775568452 +0000 UTC m=+0.560111316,LastTimestamp:2025-01-29 16:21:31.775568452 +0000 UTC m=+0.560111316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-8-df8e9582f3,}" Jan 29 16:21:31.804127 kubelet[2202]: I0129 16:21:31.803465 2202 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:21:31.807123 kubelet[2202]: I0129 16:21:31.806708 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:21:31.818977 kubelet[2202]: I0129 16:21:31.818934 2202 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:21:31.819496 kubelet[2202]: I0129 16:21:31.811988 2202 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:21:31.821132 kubelet[2202]: I0129 16:21:31.811934 2202 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:21:31.821132 kubelet[2202]: E0129 16:21:31.812309 2202 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.0-8-df8e9582f3\" not found" Jan 29 16:21:31.821132 kubelet[2202]: W0129 16:21:31.803846 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.66.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:31.821132 kubelet[2202]: E0129 16:21:31.820630 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.66.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: 
connect: connection refused" logger="UnhandledError" Jan 29 16:21:31.821132 kubelet[2202]: E0129 16:21:31.817143 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.66.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-8-df8e9582f3?timeout=10s\": dial tcp 164.92.66.114:6443: connect: connection refused" interval="200ms" Jan 29 16:21:31.821132 kubelet[2202]: I0129 16:21:31.820754 2202 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:21:31.821132 kubelet[2202]: I0129 16:21:31.814213 2202 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:21:31.822828 kubelet[2202]: W0129 16:21:31.817543 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.66.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:31.823194 kubelet[2202]: E0129 16:21:31.822864 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.92.66.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:31.823251 kubelet[2202]: I0129 16:21:31.823199 2202 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:21:31.823251 kubelet[2202]: I0129 16:21:31.823224 2202 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:21:31.824148 kubelet[2202]: I0129 16:21:31.823383 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:21:31.824767 kubelet[2202]: I0129 16:21:31.812221 2202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:21:31.840738 kubelet[2202]: I0129 16:21:31.840677 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:21:31.842406 kubelet[2202]: I0129 16:21:31.842369 2202 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:21:31.842626 kubelet[2202]: I0129 16:21:31.842613 2202 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:21:31.842717 kubelet[2202]: I0129 16:21:31.842709 2202 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:21:31.842853 kubelet[2202]: E0129 16:21:31.842835 2202 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:21:31.856127 kubelet[2202]: W0129 16:21:31.855979 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.66.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:31.856295 kubelet[2202]: E0129 16:21:31.856169 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.92.66.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:31.859996 kubelet[2202]: I0129 16:21:31.859685 2202 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:21:31.859996 kubelet[2202]: I0129 16:21:31.859705 2202 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:21:31.859996 kubelet[2202]: I0129 16:21:31.859732 2202 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:31.862308 kubelet[2202]: I0129 16:21:31.862272 2202 policy_none.go:49] "None policy: Start" Jan 29 16:21:31.863782 kubelet[2202]: I0129 16:21:31.863745 2202 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:21:31.863905 kubelet[2202]: I0129 16:21:31.863794 2202 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:21:31.874225 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:21:31.888591 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:21:31.896953 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:21:31.907573 kubelet[2202]: I0129 16:21:31.907502 2202 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:21:31.907857 kubelet[2202]: I0129 16:21:31.907765 2202 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:21:31.907857 kubelet[2202]: I0129 16:21:31.907789 2202 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:21:31.908526 kubelet[2202]: I0129 16:21:31.908293 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:21:31.911504 kubelet[2202]: E0129 16:21:31.911333 2202 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.0-8-df8e9582f3\" not found" Jan 29 16:21:31.957132 systemd[1]: Created slice kubepods-burstable-podab664994452859621e46938dde3bc995.slice - libcontainer container kubepods-burstable-podab664994452859621e46938dde3bc995.slice. Jan 29 16:21:31.981263 systemd[1]: Created slice kubepods-burstable-pod2c7c847e6512309498909ac41fae9e0d.slice - libcontainer container kubepods-burstable-pod2c7c847e6512309498909ac41fae9e0d.slice. 
Jan 29 16:21:31.994337 systemd[1]: Created slice kubepods-burstable-pod7b626fe57743684e6980ef896b717595.slice - libcontainer container kubepods-burstable-pod7b626fe57743684e6980ef896b717595.slice. Jan 29 16:21:32.009941 kubelet[2202]: I0129 16:21:32.009874 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.010585 kubelet[2202]: E0129 16:21:32.010529 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.92.66.114:6443/api/v1/nodes\": dial tcp 164.92.66.114:6443: connect: connection refused" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.021621 kubelet[2202]: E0129 16:21:32.021364 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.66.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-8-df8e9582f3?timeout=10s\": dial tcp 164.92.66.114:6443: connect: connection refused" interval="400ms" Jan 29 16:21:32.021621 kubelet[2202]: I0129 16:21:32.021529 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab664994452859621e46938dde3bc995-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-8-df8e9582f3\" (UID: \"ab664994452859621e46938dde3bc995\") " pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123118 kubelet[2202]: I0129 16:21:32.122533 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123118 kubelet[2202]: I0129 16:21:32.122620 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123118 kubelet[2202]: I0129 16:21:32.122640 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123118 kubelet[2202]: I0129 16:21:32.122681 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b626fe57743684e6980ef896b717595-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-8-df8e9582f3\" (UID: \"7b626fe57743684e6980ef896b717595\") " pod="kube-system/kube-scheduler-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123118 kubelet[2202]: I0129 16:21:32.122746 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab664994452859621e46938dde3bc995-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-8-df8e9582f3\" (UID: \"ab664994452859621e46938dde3bc995\") " 
pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123419 kubelet[2202]: I0129 16:21:32.122765 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123419 kubelet[2202]: I0129 16:21:32.122838 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab664994452859621e46938dde3bc995-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-8-df8e9582f3\" (UID: \"ab664994452859621e46938dde3bc995\") " pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.123419 kubelet[2202]: I0129 16:21:32.122878 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.212987 kubelet[2202]: I0129 16:21:32.212835 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.213849 kubelet[2202]: E0129 16:21:32.213775 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.92.66.114:6443/api/v1/nodes\": dial tcp 164.92.66.114:6443: connect: connection refused" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.277254 kubelet[2202]: E0129 16:21:32.277051 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:32.278113 containerd[1485]: time="2025-01-29T16:21:32.278038137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-8-df8e9582f3,Uid:ab664994452859621e46938dde3bc995,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:32.287325 systemd-resolved[1335]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 29 16:21:32.290740 kubelet[2202]: E0129 16:21:32.290326 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:32.291543 containerd[1485]: time="2025-01-29T16:21:32.291499017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-8-df8e9582f3,Uid:2c7c847e6512309498909ac41fae9e0d,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:32.298545 kubelet[2202]: E0129 16:21:32.298482 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:32.299115 containerd[1485]: time="2025-01-29T16:21:32.299061892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-8-df8e9582f3,Uid:7b626fe57743684e6980ef896b717595,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:32.422445 kubelet[2202]: E0129 16:21:32.422366 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.66.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-8-df8e9582f3?timeout=10s\": dial tcp 164.92.66.114:6443: connect: connection refused" interval="800ms" Jan 29 16:21:32.630320 kubelet[2202]: I0129 16:21:32.629192 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.630320 kubelet[2202]: E0129 16:21:32.629739 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.92.66.114:6443/api/v1/nodes\": dial tcp 164.92.66.114:6443: connect: connection refused" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:32.811465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633001844.mount: Deactivated successfully. 
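[Editor's note] Note how the lease controller's retry interval climbs through this section: 200ms, then 400ms, then 800ms here, and 1.6s further down; each failure doubles the wait while the API server at 164.92.66.114:6443 still refuses connections. A toy loop with the same doubling shape; the kubelet's actual controller is more involved than this:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff doubles the wait after every failure, matching the
// 200ms -> 400ms -> 800ms -> 1.6s progression in the log above and below.
func retryWithBackoff(attempt func() error, maxTries int) error {
	interval := 200 * time.Millisecond
	for i := 0; i < maxTries; i++ {
		if err := attempt(); err == nil {
			return nil
		} else {
			fmt.Printf("Failed to ensure lease exists, will retry: %v interval=%s\n",
				err, interval)
		}
		time.Sleep(interval)
		interval *= 2
	}
	return errors.New("still failing after retries")
}

func main() {
	_ = retryWithBackoff(func() error {
		return errors.New("dial tcp 164.92.66.114:6443: connect: connection refused")
	}, 4)
}
```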
Jan 29 16:21:32.815045 containerd[1485]: time="2025-01-29T16:21:32.814959407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:32.816362 containerd[1485]: time="2025-01-29T16:21:32.816315391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:32.817941 containerd[1485]: time="2025-01-29T16:21:32.817759790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:21:32.817941 containerd[1485]: time="2025-01-29T16:21:32.817892305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:21:32.818871 containerd[1485]: time="2025-01-29T16:21:32.818726514Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:32.819719 containerd[1485]: time="2025-01-29T16:21:32.819672833Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:21:32.822857 containerd[1485]: time="2025-01-29T16:21:32.822804451Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:32.826410 containerd[1485]: time="2025-01-29T16:21:32.825659710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.483428ms" Jan 29 16:21:32.829107 containerd[1485]: time="2025-01-29T16:21:32.828056189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:32.831546 containerd[1485]: time="2025-01-29T16:21:32.831480646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.847127ms" Jan 29 16:21:32.842285 containerd[1485]: time="2025-01-29T16:21:32.841353331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 557.494839ms" Jan 29 16:21:32.853659 kubelet[2202]: W0129 16:21:32.853448 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.66.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:32.853659 
kubelet[2202]: E0129 16:21:32.853584 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.92.66.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:33.007624 containerd[1485]: time="2025-01-29T16:21:33.007275873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:33.007624 containerd[1485]: time="2025-01-29T16:21:33.007403482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:33.008251 containerd[1485]: time="2025-01-29T16:21:33.007783823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:33.008251 containerd[1485]: time="2025-01-29T16:21:33.007889887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:33.008251 containerd[1485]: time="2025-01-29T16:21:33.007912210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:33.008251 containerd[1485]: time="2025-01-29T16:21:33.008111224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:33.009622 containerd[1485]: time="2025-01-29T16:21:33.007429687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:33.015229 containerd[1485]: time="2025-01-29T16:21:33.014533360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:33.015229 containerd[1485]: time="2025-01-29T16:21:33.014711634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:33.015229 containerd[1485]: time="2025-01-29T16:21:33.014740395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:33.015229 containerd[1485]: time="2025-01-29T16:21:33.014954923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:33.015229 containerd[1485]: time="2025-01-29T16:21:33.014329501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:33.057444 systemd[1]: Started cri-containerd-5108a3df488be1fb490c00a483261d7f241525a585c59fb712515cf4a0f4f408.scope - libcontainer container 5108a3df488be1fb490c00a483261d7f241525a585c59fb712515cf4a0f4f408. 
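[Editor's note] Each "Started cri-containerd-<id>.scope" unit is systemd wrapping one runc v2 shim, with the scope name embedding the container ID containerd assigned; the io.containerd.runc.v2 plugin loading lines above are those shims coming up. Kubernetes containers live in containerd's "k8s.io" namespace and can be listed with the native Go client; a small sketch, again assuming the default socket path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumption: the default containerd socket; the CRI plugin keeps
	// Kubernetes containers in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // IDs match the cri-containerd-<id>.scope units above
	}
}
```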
Jan 29 16:21:33.058874 kubelet[2202]: W0129 16:21:33.058475 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.66.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-8-df8e9582f3&limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:33.058874 kubelet[2202]: E0129 16:21:33.058600 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://164.92.66.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-8-df8e9582f3&limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:33.077250 kubelet[2202]: W0129 16:21:33.074684 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.66.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:33.077250 kubelet[2202]: E0129 16:21:33.074813 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.92.66.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:33.084034 systemd[1]: Started cri-containerd-93e15e8cd3c0788fc856491b164f49e46b7f934249d99e182c07e94dfbd3812a.scope - libcontainer container 93e15e8cd3c0788fc856491b164f49e46b7f934249d99e182c07e94dfbd3812a. Jan 29 16:21:33.098595 systemd[1]: Started cri-containerd-10ae5981cb507575471eda8f75a76ea37579db1be4fa9ae0065bb8cf152f8ab7.scope - libcontainer container 10ae5981cb507575471eda8f75a76ea37579db1be4fa9ae0065bb8cf152f8ab7. 
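[Editor's note] The reflector errors repeating through this section are the kubelet's informers re-issuing the same LIST against the not-yet-running API server; the failing request is an ordinary nodes list filtered to this node's name, exactly as the URL shows. The equivalent call with client-go, with the kubeconfig path assumed (the kubelet's own credentials come from its bootstrap flow, which this log only hints at):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at a conventional node path.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Same fieldSelector and limit as the failing informer LIST above.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=ci-4230.0.0-8-df8e9582f3",
		Limit:         500,
	})
	if err != nil {
		log.Fatal(err) // "connection refused" until the apiserver container is up
	}
	fmt.Println(len(nodes.Items))
}
```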
Jan 29 16:21:33.186849 containerd[1485]: time="2025-01-29T16:21:33.186541870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-8-df8e9582f3,Uid:ab664994452859621e46938dde3bc995,Namespace:kube-system,Attempt:0,} returns sandbox id \"5108a3df488be1fb490c00a483261d7f241525a585c59fb712515cf4a0f4f408\"" Jan 29 16:21:33.189676 kubelet[2202]: E0129 16:21:33.189194 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:33.199481 containerd[1485]: time="2025-01-29T16:21:33.199426312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-8-df8e9582f3,Uid:7b626fe57743684e6980ef896b717595,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e15e8cd3c0788fc856491b164f49e46b7f934249d99e182c07e94dfbd3812a\"" Jan 29 16:21:33.201490 kubelet[2202]: E0129 16:21:33.201440 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:33.204771 containerd[1485]: time="2025-01-29T16:21:33.203401243Z" level=info msg="CreateContainer within sandbox \"5108a3df488be1fb490c00a483261d7f241525a585c59fb712515cf4a0f4f408\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:21:33.209019 containerd[1485]: time="2025-01-29T16:21:33.208572209Z" level=info msg="CreateContainer within sandbox \"93e15e8cd3c0788fc856491b164f49e46b7f934249d99e182c07e94dfbd3812a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:21:33.230489 containerd[1485]: time="2025-01-29T16:21:33.227532192Z" level=info msg="CreateContainer within sandbox \"5108a3df488be1fb490c00a483261d7f241525a585c59fb712515cf4a0f4f408\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93df45adf38b295948a9414a91308289f187f1964275289b8a88fb9d7facb738\"" Jan 29 16:21:33.230489 containerd[1485]: time="2025-01-29T16:21:33.228445783Z" level=info msg="StartContainer for \"93df45adf38b295948a9414a91308289f187f1964275289b8a88fb9d7facb738\"" Jan 29 16:21:33.233518 kubelet[2202]: E0129 16:21:33.232995 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.66.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-8-df8e9582f3?timeout=10s\": dial tcp 164.92.66.114:6443: connect: connection refused" interval="1.6s" Jan 29 16:21:33.239619 containerd[1485]: time="2025-01-29T16:21:33.239431331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-8-df8e9582f3,Uid:2c7c847e6512309498909ac41fae9e0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"10ae5981cb507575471eda8f75a76ea37579db1be4fa9ae0065bb8cf152f8ab7\"" Jan 29 16:21:33.241086 kubelet[2202]: E0129 16:21:33.240607 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:33.241233 containerd[1485]: time="2025-01-29T16:21:33.240982836Z" level=info msg="CreateContainer within sandbox \"93e15e8cd3c0788fc856491b164f49e46b7f934249d99e182c07e94dfbd3812a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb1731e3508de57a36894eacbdeeb5269a58c458eaea247d385bacf8285f510e\"" Jan 29 16:21:33.242091 containerd[1485]: 
time="2025-01-29T16:21:33.241537348Z" level=info msg="StartContainer for \"cb1731e3508de57a36894eacbdeeb5269a58c458eaea247d385bacf8285f510e\"" Jan 29 16:21:33.244809 containerd[1485]: time="2025-01-29T16:21:33.244742970Z" level=info msg="CreateContainer within sandbox \"10ae5981cb507575471eda8f75a76ea37579db1be4fa9ae0065bb8cf152f8ab7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:21:33.270586 containerd[1485]: time="2025-01-29T16:21:33.269469261Z" level=info msg="CreateContainer within sandbox \"10ae5981cb507575471eda8f75a76ea37579db1be4fa9ae0065bb8cf152f8ab7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d22d5b845a29574b06ddcdb345446cdc16d58ab995d314ab54434fd83ed8885\"" Jan 29 16:21:33.271820 containerd[1485]: time="2025-01-29T16:21:33.271772244Z" level=info msg="StartContainer for \"7d22d5b845a29574b06ddcdb345446cdc16d58ab995d314ab54434fd83ed8885\"" Jan 29 16:21:33.280024 kubelet[2202]: W0129 16:21:33.279722 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.66.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.92.66.114:6443: connect: connection refused Jan 29 16:21:33.280024 kubelet[2202]: E0129 16:21:33.279803 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.92.66.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.92.66.114:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:33.292693 systemd[1]: Started cri-containerd-cb1731e3508de57a36894eacbdeeb5269a58c458eaea247d385bacf8285f510e.scope - libcontainer container cb1731e3508de57a36894eacbdeeb5269a58c458eaea247d385bacf8285f510e. Jan 29 16:21:33.306647 systemd[1]: Started cri-containerd-93df45adf38b295948a9414a91308289f187f1964275289b8a88fb9d7facb738.scope - libcontainer container 93df45adf38b295948a9414a91308289f187f1964275289b8a88fb9d7facb738. Jan 29 16:21:33.363703 systemd[1]: Started cri-containerd-7d22d5b845a29574b06ddcdb345446cdc16d58ab995d314ab54434fd83ed8885.scope - libcontainer container 7d22d5b845a29574b06ddcdb345446cdc16d58ab995d314ab54434fd83ed8885. 
Jan 29 16:21:33.402702 containerd[1485]: time="2025-01-29T16:21:33.402493217Z" level=info msg="StartContainer for \"cb1731e3508de57a36894eacbdeeb5269a58c458eaea247d385bacf8285f510e\" returns successfully" Jan 29 16:21:33.410775 containerd[1485]: time="2025-01-29T16:21:33.410590245Z" level=info msg="StartContainer for \"93df45adf38b295948a9414a91308289f187f1964275289b8a88fb9d7facb738\" returns successfully" Jan 29 16:21:33.436827 kubelet[2202]: I0129 16:21:33.436765 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:33.437523 kubelet[2202]: E0129 16:21:33.437485 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.92.66.114:6443/api/v1/nodes\": dial tcp 164.92.66.114:6443: connect: connection refused" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:33.456553 containerd[1485]: time="2025-01-29T16:21:33.456361564Z" level=info msg="StartContainer for \"7d22d5b845a29574b06ddcdb345446cdc16d58ab995d314ab54434fd83ed8885\" returns successfully" Jan 29 16:21:33.869490 kubelet[2202]: E0129 16:21:33.869040 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:33.876037 kubelet[2202]: E0129 16:21:33.875836 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:33.878888 kubelet[2202]: E0129 16:21:33.878669 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:34.883160 kubelet[2202]: E0129 16:21:34.883022 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:34.883783 kubelet[2202]: E0129 16:21:34.883528 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:35.039011 kubelet[2202]: I0129 16:21:35.038969 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:35.987294 kubelet[2202]: I0129 16:21:35.986857 2202 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:35.987294 kubelet[2202]: E0129 16:21:35.986917 2202 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.0.0-8-df8e9582f3\": node \"ci-4230.0.0-8-df8e9582f3\" not found" Jan 29 16:21:36.049416 kubelet[2202]: E0129 16:21:36.049276 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 29 16:21:36.796934 kubelet[2202]: I0129 16:21:36.796839 2202 apiserver.go:52] "Watching apiserver" Jan 29 16:21:36.819973 kubelet[2202]: I0129 16:21:36.819887 2202 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:21:38.670541 systemd[1]: Reload requested from client PID 2474 ('systemctl') (unit session-7.scope)... Jan 29 16:21:38.670573 systemd[1]: Reloading... Jan 29 16:21:38.864110 zram_generator::config[2518]: No configuration found. 
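[Editor's note] Registration finally succeeds once the API server container is serving, and the last lease failure changes character: "namespaces \"kube-node-lease\" not found" means the connection now works but the control plane has not created the lease namespace yet. Once it exists, the node's heartbeat can be read back through the coordination API; a sketch, with the kubeconfig path again assumed:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path, as in the earlier sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The per-node Lease the controller above kept failing to create.
	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "ci-4230.0.0-8-df8e9582f3", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Printf("held by %s, renewed %v\n", *lease.Spec.HolderIdentity, lease.Spec.RenewTime)
	}
}
```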
Jan 29 16:21:39.047267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:39.248152 systemd[1]: Reloading finished in 577 ms. Jan 29 16:21:39.285743 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:39.302583 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:21:39.303449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:39.303614 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 115.9M memory peak. Jan 29 16:21:39.311490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:39.457324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:39.461404 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:21:39.549738 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:39.549738 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:21:39.549738 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:39.550271 kubelet[2569]: I0129 16:21:39.550057 2569 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:21:39.560566 kubelet[2569]: I0129 16:21:39.560392 2569 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:21:39.560566 kubelet[2569]: I0129 16:21:39.560435 2569 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:21:39.561554 kubelet[2569]: I0129 16:21:39.561348 2569 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:21:39.563963 kubelet[2569]: I0129 16:21:39.563925 2569 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:21:39.566566 kubelet[2569]: I0129 16:21:39.566503 2569 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:21:39.572154 kubelet[2569]: E0129 16:21:39.570965 2569 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:21:39.572154 kubelet[2569]: I0129 16:21:39.571001 2569 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:21:39.574315 kubelet[2569]: I0129 16:21:39.574273 2569 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:21:39.574498 kubelet[2569]: I0129 16:21:39.574441 2569 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:21:39.574604 kubelet[2569]: I0129 16:21:39.574554 2569 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:21:39.574783 kubelet[2569]: I0129 16:21:39.574604 2569 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.0-8-df8e9582f3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:21:39.574873 kubelet[2569]: I0129 16:21:39.574788 2569 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:21:39.574873 kubelet[2569]: I0129 16:21:39.574797 2569 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:21:39.574873 kubelet[2569]: I0129 16:21:39.574833 2569 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:39.574958 kubelet[2569]: I0129 16:21:39.574954 2569 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:21:39.574986 kubelet[2569]: I0129 16:21:39.574965 2569 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:21:39.575011 kubelet[2569]: I0129 16:21:39.574994 2569 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:21:39.575011 kubelet[2569]: I0129 16:21:39.575008 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:21:39.578102 kubelet[2569]: I0129 16:21:39.578037 2569 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:21:39.580360 kubelet[2569]: I0129 16:21:39.579642 2569 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:21:39.588634 kubelet[2569]: I0129 16:21:39.588589 2569 server.go:1269] "Started kubelet" Jan 29 16:21:39.595779 kubelet[2569]: I0129 16:21:39.595694 2569 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:21:39.596991 
kubelet[2569]: I0129 16:21:39.596854 2569 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:21:39.598955 kubelet[2569]: I0129 16:21:39.598386 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:21:39.600073 kubelet[2569]: I0129 16:21:39.599902 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:21:39.600260 kubelet[2569]: I0129 16:21:39.600211 2569 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:21:39.611224 kubelet[2569]: I0129 16:21:39.611195 2569 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:21:39.611973 kubelet[2569]: I0129 16:21:39.611950 2569 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:21:39.612401 kubelet[2569]: I0129 16:21:39.612387 2569 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:21:39.614128 kubelet[2569]: I0129 16:21:39.614100 2569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:21:39.622358 kubelet[2569]: E0129 16:21:39.622291 2569 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:21:39.627291 kubelet[2569]: I0129 16:21:39.626512 2569 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:21:39.627291 kubelet[2569]: I0129 16:21:39.626540 2569 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:21:39.627291 kubelet[2569]: I0129 16:21:39.626640 2569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:21:39.635282 kubelet[2569]: I0129 16:21:39.635218 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:21:39.638169 kubelet[2569]: I0129 16:21:39.638127 2569 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:21:39.638359 kubelet[2569]: I0129 16:21:39.638350 2569 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:21:39.638436 kubelet[2569]: I0129 16:21:39.638429 2569 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:21:39.638576 kubelet[2569]: E0129 16:21:39.638556 2569 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:21:39.690882 sudo[2600]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:21:39.692393 sudo[2600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:21:39.718492 kubelet[2569]: I0129 16:21:39.716899 2569 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:21:39.718492 kubelet[2569]: I0129 16:21:39.716920 2569 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:21:39.718492 kubelet[2569]: I0129 16:21:39.716945 2569 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:39.718492 kubelet[2569]: I0129 16:21:39.717142 2569 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:21:39.718492 kubelet[2569]: I0129 16:21:39.717153 2569 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:21:39.718492 kubelet[2569]: I0129 16:21:39.717172 2569 policy_none.go:49] "None policy: Start" Jan 29 16:21:39.720660 kubelet[2569]: I0129 16:21:39.720631 2569 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:21:39.721104 kubelet[2569]: I0129 16:21:39.720902 2569 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:21:39.721534 kubelet[2569]: I0129 16:21:39.721382 2569 state_mem.go:75] "Updated machine memory state" Jan 29 16:21:39.731350 kubelet[2569]: I0129 16:21:39.730593 2569 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:21:39.731350 kubelet[2569]: I0129 16:21:39.730784 2569 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:21:39.731350 kubelet[2569]: I0129 16:21:39.730797 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:21:39.731350 kubelet[2569]: I0129 16:21:39.731237 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:21:39.772641 kubelet[2569]: W0129 16:21:39.772453 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:21:39.774625 kubelet[2569]: W0129 16:21:39.774591 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:21:39.777549 kubelet[2569]: W0129 16:21:39.777516 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:21:39.814437 kubelet[2569]: I0129 16:21:39.814204 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b626fe57743684e6980ef896b717595-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-8-df8e9582f3\" (UID: \"7b626fe57743684e6980ef896b717595\") " pod="kube-system/kube-scheduler-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814437 kubelet[2569]: 
I0129 16:21:39.814294 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814437 kubelet[2569]: I0129 16:21:39.814313 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814437 kubelet[2569]: I0129 16:21:39.814329 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814916 kubelet[2569]: I0129 16:21:39.814751 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814916 kubelet[2569]: I0129 16:21:39.814804 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c7c847e6512309498909ac41fae9e0d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.0-8-df8e9582f3\" (UID: \"2c7c847e6512309498909ac41fae9e0d\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814916 kubelet[2569]: I0129 16:21:39.814825 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab664994452859621e46938dde3bc995-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-8-df8e9582f3\" (UID: \"ab664994452859621e46938dde3bc995\") " pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814916 kubelet[2569]: I0129 16:21:39.814859 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab664994452859621e46938dde3bc995-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-8-df8e9582f3\" (UID: \"ab664994452859621e46938dde3bc995\") " pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.814916 kubelet[2569]: I0129 16:21:39.814875 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab664994452859621e46938dde3bc995-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-8-df8e9582f3\" (UID: \"ab664994452859621e46938dde3bc995\") " pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.846919 kubelet[2569]: I0129 16:21:39.846616 2569 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.864237 kubelet[2569]: I0129 16:21:39.864192 2569 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:39.864399 kubelet[2569]: I0129 16:21:39.864285 2569 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:40.075203 kubelet[2569]: E0129 16:21:40.074269 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:40.075914 kubelet[2569]: E0129 16:21:40.075876 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:40.078342 kubelet[2569]: E0129 16:21:40.078304 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:40.520254 sudo[2600]: pam_unix(sudo:session): session closed for user root Jan 29 16:21:40.576906 kubelet[2569]: I0129 16:21:40.576840 2569 apiserver.go:52] "Watching apiserver" Jan 29 16:21:40.612569 kubelet[2569]: I0129 16:21:40.612516 2569 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:21:40.663799 kubelet[2569]: E0129 16:21:40.663196 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:40.677175 kubelet[2569]: W0129 16:21:40.677138 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:21:40.678555 kubelet[2569]: E0129 16:21:40.678032 2569 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.0-8-df8e9582f3\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:40.678555 kubelet[2569]: E0129 16:21:40.678250 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:40.678555 kubelet[2569]: W0129 16:21:40.677851 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:21:40.678555 kubelet[2569]: E0129 16:21:40.678369 2569 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.0.0-8-df8e9582f3\" already exists" pod="kube-system/kube-scheduler-ci-4230.0.0-8-df8e9582f3" Jan 29 16:21:40.678555 kubelet[2569]: E0129 16:21:40.678463 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:40.745849 kubelet[2569]: I0129 16:21:40.745765 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.0-8-df8e9582f3" podStartSLOduration=1.745690307 podStartE2EDuration="1.745690307s" podCreationTimestamp="2025-01-29 16:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-29 16:21:40.743997356 +0000 UTC m=+1.264710297" watchObservedRunningTime="2025-01-29 16:21:40.745690307 +0000 UTC m=+1.266403248" Jan 29 16:21:40.746369 kubelet[2569]: I0129 16:21:40.746230 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.0-8-df8e9582f3" podStartSLOduration=1.746215592 podStartE2EDuration="1.746215592s" podCreationTimestamp="2025-01-29 16:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:40.717451348 +0000 UTC m=+1.238164263" watchObservedRunningTime="2025-01-29 16:21:40.746215592 +0000 UTC m=+1.266928524" Jan 29 16:21:41.667308 kubelet[2569]: E0129 16:21:41.665826 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:41.669656 kubelet[2569]: E0129 16:21:41.668810 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:42.669338 kubelet[2569]: E0129 16:21:42.669268 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:42.936630 sudo[1678]: pam_unix(sudo:session): session closed for user root Jan 29 16:21:42.940820 sshd[1677]: Connection closed by 139.178.89.65 port 52788 Jan 29 16:21:42.942372 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:42.949215 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:21:42.951502 systemd[1]: sshd@6-164.92.66.114:22-139.178.89.65:52788.service: Deactivated successfully. Jan 29 16:21:42.956395 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:21:42.956885 systemd[1]: session-7.scope: Consumed 6.056s CPU time, 220.2M memory peak. Jan 29 16:21:42.962168 systemd-logind[1467]: Removed session 7. Jan 29 16:21:44.656514 kubelet[2569]: I0129 16:21:44.656322 2569 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:21:44.658217 containerd[1485]: time="2025-01-29T16:21:44.658044466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 16:21:44.660840 kubelet[2569]: I0129 16:21:44.660260 2569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:21:45.337249 kubelet[2569]: I0129 16:21:45.336911 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.0-8-df8e9582f3" podStartSLOduration=6.336879959 podStartE2EDuration="6.336879959s" podCreationTimestamp="2025-01-29 16:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:40.773010745 +0000 UTC m=+1.293723688" watchObservedRunningTime="2025-01-29 16:21:45.336879959 +0000 UTC m=+5.857592898" Jan 29 16:21:45.356174 kubelet[2569]: I0129 16:21:45.356131 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c669c46-5448-4116-9309-0785e6df6a4d-xtables-lock\") pod \"kube-proxy-cftt8\" (UID: \"4c669c46-5448-4116-9309-0785e6df6a4d\") " pod="kube-system/kube-proxy-cftt8" Jan 29 16:21:45.356565 kubelet[2569]: I0129 16:21:45.356439 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c669c46-5448-4116-9309-0785e6df6a4d-kube-proxy\") pod \"kube-proxy-cftt8\" (UID: \"4c669c46-5448-4116-9309-0785e6df6a4d\") " pod="kube-system/kube-proxy-cftt8" Jan 29 16:21:45.356565 kubelet[2569]: I0129 16:21:45.356475 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c669c46-5448-4116-9309-0785e6df6a4d-lib-modules\") pod \"kube-proxy-cftt8\" (UID: \"4c669c46-5448-4116-9309-0785e6df6a4d\") " pod="kube-system/kube-proxy-cftt8" Jan 29 16:21:45.356565 kubelet[2569]: I0129 16:21:45.356500 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46tx\" (UniqueName: \"kubernetes.io/projected/4c669c46-5448-4116-9309-0785e6df6a4d-kube-api-access-z46tx\") pod \"kube-proxy-cftt8\" (UID: \"4c669c46-5448-4116-9309-0785e6df6a4d\") " pod="kube-system/kube-proxy-cftt8" Jan 29 16:21:45.356635 systemd[1]: Created slice kubepods-besteffort-pod4c669c46_5448_4116_9309_0785e6df6a4d.slice - libcontainer container kubepods-besteffort-pod4c669c46_5448_4116_9309_0785e6df6a4d.slice. 
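The Created slice record shows how kubelet names pod cgroups under systemd: the QoS class sits in the prefix and the pod UID has its dashes escaped to underscores, since systemd reserves '-' as its hierarchy separator. A sketch of just that visible convention (the helper is ours, not kubelet source):

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice rebuilds the unit name visible in the journal:
// kubepods-<qos>-pod<uid with dashes as underscores>.slice
func podSlice(qos, uid string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSlice("besteffort", "4c669c46-5448-4116-9309-0785e6df6a4d"))
	// kubepods-besteffort-pod4c669c46_5448_4116_9309_0785e6df6a4d.slice
}
```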
Jan 29 16:21:45.372839 kubelet[2569]: W0129 16:21:45.372794 2569 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.0.0-8-df8e9582f3" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.0.0-8-df8e9582f3' and this object Jan 29 16:21:45.373135 kubelet[2569]: E0129 16:21:45.373081 2569 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.0.0-8-df8e9582f3\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.0.0-8-df8e9582f3' and this object" logger="UnhandledError" Jan 29 16:21:45.376389 systemd[1]: Created slice kubepods-burstable-podf08433b6_100d_418e_a7a8_2d1174bf43b0.slice - libcontainer container kubepods-burstable-podf08433b6_100d_418e_a7a8_2d1174bf43b0.slice. Jan 29 16:21:45.457541 kubelet[2569]: I0129 16:21:45.457485 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-kernel\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.458059 kubelet[2569]: I0129 16:21:45.457917 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-hubble-tls\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460091 kubelet[2569]: I0129 16:21:45.458264 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-xtables-lock\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460091 kubelet[2569]: I0129 16:21:45.458304 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-hostproc\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460091 kubelet[2569]: I0129 16:21:45.458330 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-config-path\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460091 kubelet[2569]: I0129 16:21:45.458353 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-net\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460091 kubelet[2569]: I0129 16:21:45.458376 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-cgroup\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460091 kubelet[2569]: I0129 16:21:45.458422 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-run\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460458 kubelet[2569]: I0129 16:21:45.458446 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-lib-modules\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460458 kubelet[2569]: I0129 16:21:45.458468 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f08433b6-100d-418e-a7a8-2d1174bf43b0-clustermesh-secrets\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460458 kubelet[2569]: I0129 16:21:45.458494 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvf5\" (UniqueName: \"kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-kube-api-access-lwvf5\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460458 kubelet[2569]: I0129 16:21:45.458558 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cni-path\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460458 kubelet[2569]: I0129 16:21:45.458581 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-etc-cni-netd\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.460458 kubelet[2569]: I0129 16:21:45.458630 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-bpf-maps\") pod \"cilium-lrwsg\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " pod="kube-system/cilium-lrwsg" Jan 29 16:21:45.477763 kubelet[2569]: E0129 16:21:45.477716 2569 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 16:21:45.478237 kubelet[2569]: E0129 16:21:45.477998 2569 projected.go:194] Error preparing data for projected volume kube-api-access-z46tx for pod kube-system/kube-proxy-cftt8: configmap "kube-root-ca.crt" not found Jan 29 16:21:45.478237 kubelet[2569]: E0129 16:21:45.478174 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c669c46-5448-4116-9309-0785e6df6a4d-kube-api-access-z46tx podName:4c669c46-5448-4116-9309-0785e6df6a4d nodeName:}" failed. 
No retries permitted until 2025-01-29 16:21:45.978118464 +0000 UTC m=+6.498831404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z46tx" (UniqueName: "kubernetes.io/projected/4c669c46-5448-4116-9309-0785e6df6a4d-kube-api-access-z46tx") pod "kube-proxy-cftt8" (UID: "4c669c46-5448-4116-9309-0785e6df6a4d") : configmap "kube-root-ca.crt" not found Jan 29 16:21:45.573344 kubelet[2569]: E0129 16:21:45.573312 2569 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 16:21:45.573880 kubelet[2569]: E0129 16:21:45.573497 2569 projected.go:194] Error preparing data for projected volume kube-api-access-lwvf5 for pod kube-system/cilium-lrwsg: configmap "kube-root-ca.crt" not found Jan 29 16:21:45.573880 kubelet[2569]: E0129 16:21:45.573581 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-kube-api-access-lwvf5 podName:f08433b6-100d-418e-a7a8-2d1174bf43b0 nodeName:}" failed. No retries permitted until 2025-01-29 16:21:46.073559384 +0000 UTC m=+6.594272313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lwvf5" (UniqueName: "kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-kube-api-access-lwvf5") pod "cilium-lrwsg" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0") : configmap "kube-root-ca.crt" not found Jan 29 16:21:45.772025 systemd[1]: Created slice kubepods-besteffort-pod4221b60b_8726_4e68_b252_a62f89e937dd.slice - libcontainer container kubepods-besteffort-pod4221b60b_8726_4e68_b252_a62f89e937dd.slice. Jan 29 16:21:45.861549 kubelet[2569]: I0129 16:21:45.861483 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m58cl\" (UniqueName: \"kubernetes.io/projected/4221b60b-8726-4e68-b252-a62f89e937dd-kube-api-access-m58cl\") pod \"cilium-operator-5d85765b45-8fxvk\" (UID: \"4221b60b-8726-4e68-b252-a62f89e937dd\") " pod="kube-system/cilium-operator-5d85765b45-8fxvk" Jan 29 16:21:45.862018 kubelet[2569]: I0129 16:21:45.861575 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4221b60b-8726-4e68-b252-a62f89e937dd-cilium-config-path\") pod \"cilium-operator-5d85765b45-8fxvk\" (UID: \"4221b60b-8726-4e68-b252-a62f89e937dd\") " pod="kube-system/cilium-operator-5d85765b45-8fxvk" Jan 29 16:21:46.077395 kubelet[2569]: E0129 16:21:46.076820 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:46.080153 containerd[1485]: time="2025-01-29T16:21:46.080092772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8fxvk,Uid:4221b60b-8726-4e68-b252-a62f89e937dd,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:46.125130 containerd[1485]: time="2025-01-29T16:21:46.124581768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:46.125130 containerd[1485]: time="2025-01-29T16:21:46.124684160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:46.125130 containerd[1485]: time="2025-01-29T16:21:46.124696688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:46.125130 containerd[1485]: time="2025-01-29T16:21:46.124801445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:46.155451 systemd[1]: Started cri-containerd-c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340.scope - libcontainer container c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340. Jan 29 16:21:46.224485 containerd[1485]: time="2025-01-29T16:21:46.223419799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8fxvk,Uid:4221b60b-8726-4e68-b252-a62f89e937dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\"" Jan 29 16:21:46.226226 kubelet[2569]: E0129 16:21:46.226012 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:46.228737 containerd[1485]: time="2025-01-29T16:21:46.228674329Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:21:46.266325 kubelet[2569]: E0129 16:21:46.266253 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:46.267823 containerd[1485]: time="2025-01-29T16:21:46.267005996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cftt8,Uid:4c669c46-5448-4116-9309-0785e6df6a4d,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:46.299555 containerd[1485]: time="2025-01-29T16:21:46.299299368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:46.299555 containerd[1485]: time="2025-01-29T16:21:46.299421760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:46.300478 containerd[1485]: time="2025-01-29T16:21:46.299715136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:46.300478 containerd[1485]: time="2025-01-29T16:21:46.299908570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:46.323450 systemd[1]: Started cri-containerd-9ba2eb339e530f9ca4bf47e40b36fb51225e4dc80282935406af523b5abc70b6.scope - libcontainer container 9ba2eb339e530f9ca4bf47e40b36fb51225e4dc80282935406af523b5abc70b6. 
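The scope unit started above corresponds to the sandbox ID that RunPodSandbox returns in the next record: the 64-character hex ID is the join key between containerd's view and systemd's. A small extractor for the scope names seen here (the regexp is ours, assuming the cri-containerd-<id>.scope naming holds):

```go
package main

import (
	"fmt"
	"regexp"
)

var scopeID = regexp.MustCompile(`cri-containerd-([0-9a-f]{64})\.scope`)

func main() {
	unit := "cri-containerd-9ba2eb339e530f9ca4bf47e40b36fb51225e4dc80282935406af523b5abc70b6.scope"
	if m := scopeID.FindStringSubmatch(unit); m != nil {
		fmt.Println(m[1]) // same ID RunPodSandbox returns for kube-proxy-cftt8
	}
}
```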
Jan 29 16:21:46.371914 containerd[1485]: time="2025-01-29T16:21:46.371322030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cftt8,Uid:4c669c46-5448-4116-9309-0785e6df6a4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ba2eb339e530f9ca4bf47e40b36fb51225e4dc80282935406af523b5abc70b6\"" Jan 29 16:21:46.373783 kubelet[2569]: E0129 16:21:46.372962 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:46.378343 containerd[1485]: time="2025-01-29T16:21:46.378285133Z" level=info msg="CreateContainer within sandbox \"9ba2eb339e530f9ca4bf47e40b36fb51225e4dc80282935406af523b5abc70b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:21:46.399590 containerd[1485]: time="2025-01-29T16:21:46.399386181Z" level=info msg="CreateContainer within sandbox \"9ba2eb339e530f9ca4bf47e40b36fb51225e4dc80282935406af523b5abc70b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a741cf0db758e758a2339aa1fdd034dcc6a5ec01831fc56c834ff7554a290f40\"" Jan 29 16:21:46.401450 containerd[1485]: time="2025-01-29T16:21:46.401354575Z" level=info msg="StartContainer for \"a741cf0db758e758a2339aa1fdd034dcc6a5ec01831fc56c834ff7554a290f40\"" Jan 29 16:21:46.448418 systemd[1]: Started cri-containerd-a741cf0db758e758a2339aa1fdd034dcc6a5ec01831fc56c834ff7554a290f40.scope - libcontainer container a741cf0db758e758a2339aa1fdd034dcc6a5ec01831fc56c834ff7554a290f40. Jan 29 16:21:46.501386 containerd[1485]: time="2025-01-29T16:21:46.501312733Z" level=info msg="StartContainer for \"a741cf0db758e758a2339aa1fdd034dcc6a5ec01831fc56c834ff7554a290f40\" returns successfully" Jan 29 16:21:46.560912 kubelet[2569]: E0129 16:21:46.560233 2569 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 29 16:21:46.560912 kubelet[2569]: E0129 16:21:46.560385 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f08433b6-100d-418e-a7a8-2d1174bf43b0-clustermesh-secrets podName:f08433b6-100d-418e-a7a8-2d1174bf43b0 nodeName:}" failed. No retries permitted until 2025-01-29 16:21:47.060354859 +0000 UTC m=+7.581067798 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/f08433b6-100d-418e-a7a8-2d1174bf43b0-clustermesh-secrets") pod "cilium-lrwsg" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:21:46.686172 kubelet[2569]: E0129 16:21:46.685791 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:46.715531 kubelet[2569]: I0129 16:21:46.714730 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cftt8" podStartSLOduration=1.714696183 podStartE2EDuration="1.714696183s" podCreationTimestamp="2025-01-29 16:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:46.714671435 +0000 UTC m=+7.235384374" watchObservedRunningTime="2025-01-29 16:21:46.714696183 +0000 UTC m=+7.235409122" Jan 29 16:21:47.080601 kubelet[2569]: E0129 16:21:47.079838 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:47.183509 kubelet[2569]: E0129 16:21:47.183140 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:47.185232 containerd[1485]: time="2025-01-29T16:21:47.185148929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrwsg,Uid:f08433b6-100d-418e-a7a8-2d1174bf43b0,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:47.216348 containerd[1485]: time="2025-01-29T16:21:47.216164310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:47.216348 containerd[1485]: time="2025-01-29T16:21:47.216270343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:47.216348 containerd[1485]: time="2025-01-29T16:21:47.216299697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:47.216947 containerd[1485]: time="2025-01-29T16:21:47.216459084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:47.251499 systemd[1]: Started cri-containerd-6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2.scope - libcontainer container 6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2. 
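The MountVolume.SetUp failures above are the normal bootstrap race rather than faults: kube-root-ca.crt has not been published into the namespace yet, and the node authorizer will not serve cilium-clustermesh until the pod binding links this node to that secret (our gloss on the "no relationship found" error). The volume manager parks each operation and retries; the log shows the initial durationBeforeRetry of 500ms (failure at 16:21:46.560, no retries until 16:21:47.060). Kubelet grows this delay exponentially up to a cap; the doubling-with-cap shape below is our assumption, only the 500ms start is attested in the log:

```go
package main

import (
	"fmt"
	"time"
)

// backoff sketches an exponential retry delay: initial * 2^attempt, capped.
// The 500ms initial delay is from the log; the factor and cap are assumptions.
func backoff(attempt int, initial, max time.Duration) time.Duration {
	d := initial << uint(attempt)
	if d > max || d <= 0 { // d <= 0 guards against shift overflow
		return max
	}
	return d
}

func main() {
	for i := 0; i < 5; i++ {
		fmt.Println(backoff(i, 500*time.Millisecond, 2*time.Minute))
	}
	// 500ms 1s 2s 4s 8s
}
```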
Jan 29 16:21:47.296102 containerd[1485]: time="2025-01-29T16:21:47.295993539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrwsg,Uid:f08433b6-100d-418e-a7a8-2d1174bf43b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\"" Jan 29 16:21:47.297865 kubelet[2569]: E0129 16:21:47.297779 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:47.695999 kubelet[2569]: E0129 16:21:47.695951 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:49.172108 containerd[1485]: time="2025-01-29T16:21:49.170463567Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:49.172108 containerd[1485]: time="2025-01-29T16:21:49.171693119Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:21:49.173245 containerd[1485]: time="2025-01-29T16:21:49.173204161Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:49.174707 containerd[1485]: time="2025-01-29T16:21:49.174642808Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.945906287s" Jan 29 16:21:49.174707 containerd[1485]: time="2025-01-29T16:21:49.174707063Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:21:49.177037 containerd[1485]: time="2025-01-29T16:21:49.176966542Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:21:49.178937 containerd[1485]: time="2025-01-29T16:21:49.178875139Z" level=info msg="CreateContainer within sandbox \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:21:49.210113 containerd[1485]: time="2025-01-29T16:21:49.208913684Z" level=info msg="CreateContainer within sandbox \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\"" Jan 29 16:21:49.213115 containerd[1485]: time="2025-01-29T16:21:49.212653975Z" level=info msg="StartContainer for \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\"" Jan 29 16:21:49.268776 systemd[1]: 
run-containerd-runc-k8s.io-2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620-runc.FEp5xm.mount: Deactivated successfully. Jan 29 16:21:49.278637 systemd[1]: Started cri-containerd-2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620.scope - libcontainer container 2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620. Jan 29 16:21:49.327380 containerd[1485]: time="2025-01-29T16:21:49.327251121Z" level=info msg="StartContainer for \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\" returns successfully" Jan 29 16:21:49.704511 kubelet[2569]: E0129 16:21:49.704453 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:50.564638 kubelet[2569]: E0129 16:21:50.560046 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:50.759182 kubelet[2569]: I0129 16:21:50.754804 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8fxvk" podStartSLOduration=2.806122003 podStartE2EDuration="5.75475717s" podCreationTimestamp="2025-01-29 16:21:45 +0000 UTC" firstStartedPulling="2025-01-29 16:21:46.227985219 +0000 UTC m=+6.748698148" lastFinishedPulling="2025-01-29 16:21:49.176620381 +0000 UTC m=+9.697333315" observedRunningTime="2025-01-29 16:21:49.740859824 +0000 UTC m=+10.261572765" watchObservedRunningTime="2025-01-29 16:21:50.75475717 +0000 UTC m=+11.275470112" Jan 29 16:21:50.761599 kubelet[2569]: E0129 16:21:50.761506 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:52.092957 kubelet[2569]: E0129 16:21:52.092910 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:53.696397 update_engine[1468]: I20250129 16:21:53.696234 1468 update_attempter.cc:509] Updating boot flags... Jan 29 16:21:54.751337 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3002) Jan 29 16:21:55.146365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994346491.mount: Deactivated successfully. 
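The operator image was requested by tag and digest together (quay.io/cilium/operator-generic:v1.12.5@sha256:...), and the pull record above reports repo tag "": when a digest is pinned, only the digest reference is retained. A rough splitter for that reference shape (plain string handling, not containerd's reference parser):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks name[:tag][@digest] apart. Good enough for the
// digest-pinned references in this log; not a general OCI ref parser.
func splitRef(ref string) (name, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// The tag colon is the one after the final path slash (ports survive).
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	n, t, d := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println(n) // quay.io/cilium/operator-generic
	fmt.Println(t) // v1.12.5
	fmt.Println(d) // sha256:b296eb7f...
}
```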
Jan 29 16:21:57.930367 containerd[1485]: time="2025-01-29T16:21:57.930300720Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:57.932122 containerd[1485]: time="2025-01-29T16:21:57.931727113Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:21:57.932329 containerd[1485]: time="2025-01-29T16:21:57.932219193Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:57.935576 containerd[1485]: time="2025-01-29T16:21:57.935361588Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.75832059s" Jan 29 16:21:57.935576 containerd[1485]: time="2025-01-29T16:21:57.935422885Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:21:57.941592 containerd[1485]: time="2025-01-29T16:21:57.941524907Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:21:58.039058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532429669.mount: Deactivated successfully. Jan 29 16:21:58.042745 containerd[1485]: time="2025-01-29T16:21:58.042691110Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\"" Jan 29 16:21:58.046330 containerd[1485]: time="2025-01-29T16:21:58.045072765Z" level=info msg="StartContainer for \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\"" Jan 29 16:21:58.218642 systemd[1]: Started cri-containerd-8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a.scope - libcontainer container 8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a. Jan 29 16:21:58.251271 containerd[1485]: time="2025-01-29T16:21:58.251173674Z" level=info msg="StartContainer for \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\" returns successfully" Jan 29 16:21:58.271317 systemd[1]: cri-containerd-8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a.scope: Deactivated successfully. 
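The two pull records carry enough data for a quick throughput estimate: 18,904,197 bytes in 2.945906287s for the operator image, and 166,730,503 bytes in 8.75832059s for the agent image above. Worked out (all numbers copied from the log):

```go
package main

import "fmt"

func main() {
	// bytes read / pull wall time, both taken from the containerd records
	fmt.Printf("operator: %.1f MiB/s\n", 18904197/2.945906287/(1<<20))  // ~6.1
	fmt.Printf("agent:    %.1f MiB/s\n", 166730503/8.75832059/(1<<20)) // ~18.2
}
```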
Jan 29 16:21:58.392678 containerd[1485]: time="2025-01-29T16:21:58.383858315Z" level=info msg="shim disconnected" id=8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a namespace=k8s.io Jan 29 16:21:58.392982 containerd[1485]: time="2025-01-29T16:21:58.392673786Z" level=warning msg="cleaning up after shim disconnected" id=8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a namespace=k8s.io Jan 29 16:21:58.392982 containerd[1485]: time="2025-01-29T16:21:58.392710799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:58.845168 kubelet[2569]: E0129 16:21:58.845107 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:58.851486 containerd[1485]: time="2025-01-29T16:21:58.851417892Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:21:58.867376 containerd[1485]: time="2025-01-29T16:21:58.867185157Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\"" Jan 29 16:21:58.868346 containerd[1485]: time="2025-01-29T16:21:58.868301459Z" level=info msg="StartContainer for \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\"" Jan 29 16:21:58.918428 systemd[1]: Started cri-containerd-b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca.scope - libcontainer container b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca. Jan 29 16:21:58.958826 containerd[1485]: time="2025-01-29T16:21:58.958688928Z" level=info msg="StartContainer for \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\" returns successfully" Jan 29 16:21:58.979104 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:21:58.979741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:21:58.980434 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:21:58.986768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:21:58.987562 systemd[1]: cri-containerd-b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca.scope: Deactivated successfully. Jan 29 16:21:58.988401 systemd[1]: cri-containerd-b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca.scope: Consumed 31ms CPU time, 5.4M memory peak, 8K read from disk, 2.2M written to disk. Jan 29 16:21:59.035388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a-rootfs.mount: Deactivated successfully. Jan 29 16:21:59.035714 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:21:59.050728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca-rootfs.mount: Deactivated successfully. Jan 29 16:21:59.055148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
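cilium's mount-cgroup is an init step that runs to completion, so the shim disconnected / cleaning up triple above records a normal exit, not a crash; the same triple repeats for each init container that follows. When grepping a journal like this one, the container ID in those records is the stable key (regexp ours):

```go
package main

import (
	"fmt"
	"regexp"
)

var shimGone = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

func main() {
	line := `time="2025-01-29T16:21:58.383858315Z" level=info msg="shim disconnected" id=8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a namespace=k8s.io`
	if m := shimGone.FindStringSubmatch(line); m != nil {
		fmt.Println("exited container:", m[1])
	}
}
```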
Jan 29 16:21:59.065427 containerd[1485]: time="2025-01-29T16:21:59.062707218Z" level=info msg="shim disconnected" id=b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca namespace=k8s.io Jan 29 16:21:59.065427 containerd[1485]: time="2025-01-29T16:21:59.062811613Z" level=warning msg="cleaning up after shim disconnected" id=b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca namespace=k8s.io Jan 29 16:21:59.065427 containerd[1485]: time="2025-01-29T16:21:59.062827214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:59.851622 kubelet[2569]: E0129 16:21:59.851393 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:21:59.858145 containerd[1485]: time="2025-01-29T16:21:59.857770196Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:21:59.905940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1634373033.mount: Deactivated successfully. Jan 29 16:21:59.909730 containerd[1485]: time="2025-01-29T16:21:59.909482707Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\"" Jan 29 16:21:59.913303 containerd[1485]: time="2025-01-29T16:21:59.913241273Z" level=info msg="StartContainer for \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\"" Jan 29 16:21:59.958403 systemd[1]: Started cri-containerd-6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559.scope - libcontainer container 6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559. Jan 29 16:21:59.999741 containerd[1485]: time="2025-01-29T16:21:59.999592001Z" level=info msg="StartContainer for \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\" returns successfully" Jan 29 16:22:00.006841 systemd[1]: cri-containerd-6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559.scope: Deactivated successfully. Jan 29 16:22:00.050627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559-rootfs.mount: Deactivated successfully. 
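Unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount1634373033.mount are systemd path escaping: '/' becomes '-' and a literal '-' inside a component becomes \x2d, so the unit maps back to /var/lib/containerd/tmpmounts/containerd-mount1634373033. A sketch of just that subset of the rules (the real systemd-escape also handles leading dots and other non-alphanumeric bytes):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath applies the two rules visible in this journal: escape literal
// '-' first (so the separators we create next stay untouched), then turn
// path separators into '-'. Leading/trailing '/' drop.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	return strings.ReplaceAll(p, "/", "-")
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount1634373033") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount1634373033.mount
}
```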
Jan 29 16:22:00.053292 containerd[1485]: time="2025-01-29T16:22:00.052833738Z" level=info msg="shim disconnected" id=6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559 namespace=k8s.io Jan 29 16:22:00.053292 containerd[1485]: time="2025-01-29T16:22:00.052930159Z" level=warning msg="cleaning up after shim disconnected" id=6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559 namespace=k8s.io Jan 29 16:22:00.053292 containerd[1485]: time="2025-01-29T16:22:00.052943186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:00.856375 kubelet[2569]: E0129 16:22:00.856235 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:00.861474 containerd[1485]: time="2025-01-29T16:22:00.861104404Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:22:00.883165 containerd[1485]: time="2025-01-29T16:22:00.882268313Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\"" Jan 29 16:22:00.885669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690846033.mount: Deactivated successfully. Jan 29 16:22:00.888764 containerd[1485]: time="2025-01-29T16:22:00.888569650Z" level=info msg="StartContainer for \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\"" Jan 29 16:22:00.937468 systemd[1]: Started cri-containerd-3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7.scope - libcontainer container 3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7. Jan 29 16:22:00.982119 systemd[1]: cri-containerd-3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7.scope: Deactivated successfully. Jan 29 16:22:00.984323 containerd[1485]: time="2025-01-29T16:22:00.983980064Z" level=info msg="StartContainer for \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\" returns successfully" Jan 29 16:22:01.016632 containerd[1485]: time="2025-01-29T16:22:01.016378418Z" level=info msg="shim disconnected" id=3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7 namespace=k8s.io Jan 29 16:22:01.016632 containerd[1485]: time="2025-01-29T16:22:01.016455527Z" level=warning msg="cleaning up after shim disconnected" id=3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7 namespace=k8s.io Jan 29 16:22:01.016632 containerd[1485]: time="2025-01-29T16:22:01.016468138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:01.036570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7-rootfs.mount: Deactivated successfully. 
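The "Nameserver limits exceeded" error that punctuates this whole journal is kubelet truncating the host resolv.conf to three nameservers (matching the classic glibc MAXNS limit, our gloss); the droplet's file evidently lists more entries, duplicates included, since the applied line keeps 67.207.67.3 twice. The truncation itself is a single slice expression (the four-entry input is an assumption, only the first three are attested):

```go
package main

import "fmt"

const maxNameservers = 3 // kubelet's cap; classic resolvers also read at most 3

func main() {
	// What the droplet's resolv.conf plausibly contained (assumption);
	// the first three entries survive verbatim, duplicates and all.
	ns := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8"}
	if len(ns) > maxNameservers {
		fmt.Println("Nameserver limits exceeded, applying:", ns[:maxNameservers])
	}
}
```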
Jan 29 16:22:01.864137 kubelet[2569]: E0129 16:22:01.862261 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:01.866846 containerd[1485]: time="2025-01-29T16:22:01.866786635Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:22:01.928510 containerd[1485]: time="2025-01-29T16:22:01.925845479Z" level=info msg="CreateContainer within sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\"" Jan 29 16:22:01.928510 containerd[1485]: time="2025-01-29T16:22:01.927351511Z" level=info msg="StartContainer for \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\"" Jan 29 16:22:01.969404 systemd[1]: Started cri-containerd-5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95.scope - libcontainer container 5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95. Jan 29 16:22:02.047202 containerd[1485]: time="2025-01-29T16:22:02.047140466Z" level=info msg="StartContainer for \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\" returns successfully" Jan 29 16:22:02.319505 kubelet[2569]: I0129 16:22:02.319316 2569 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 16:22:02.386574 systemd[1]: Created slice kubepods-burstable-poda1eeba69_c15e_494a_be84_f41668dde079.slice - libcontainer container kubepods-burstable-poda1eeba69_c15e_494a_be84_f41668dde079.slice. Jan 29 16:22:02.412520 systemd[1]: Created slice kubepods-burstable-podb4fcd33b_5bf0_4449_8868_3770e93403be.slice - libcontainer container kubepods-burstable-podb4fcd33b_5bf0_4449_8868_3770e93403be.slice. 
Jan 29 16:22:02.429105 kubelet[2569]: I0129 16:22:02.428871 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1eeba69-c15e-494a-be84-f41668dde079-config-volume\") pod \"coredns-6f6b679f8f-slmg9\" (UID: \"a1eeba69-c15e-494a-be84-f41668dde079\") " pod="kube-system/coredns-6f6b679f8f-slmg9" Jan 29 16:22:02.429838 kubelet[2569]: I0129 16:22:02.429735 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw72r\" (UniqueName: \"kubernetes.io/projected/b4fcd33b-5bf0-4449-8868-3770e93403be-kube-api-access-jw72r\") pod \"coredns-6f6b679f8f-lzn9l\" (UID: \"b4fcd33b-5bf0-4449-8868-3770e93403be\") " pod="kube-system/coredns-6f6b679f8f-lzn9l" Jan 29 16:22:02.430341 kubelet[2569]: I0129 16:22:02.430079 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4fcd33b-5bf0-4449-8868-3770e93403be-config-volume\") pod \"coredns-6f6b679f8f-lzn9l\" (UID: \"b4fcd33b-5bf0-4449-8868-3770e93403be\") " pod="kube-system/coredns-6f6b679f8f-lzn9l" Jan 29 16:22:02.430341 kubelet[2569]: I0129 16:22:02.430120 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqfh\" (UniqueName: \"kubernetes.io/projected/a1eeba69-c15e-494a-be84-f41668dde079-kube-api-access-tpqfh\") pod \"coredns-6f6b679f8f-slmg9\" (UID: \"a1eeba69-c15e-494a-be84-f41668dde079\") " pod="kube-system/coredns-6f6b679f8f-slmg9" Jan 29 16:22:02.697026 kubelet[2569]: E0129 16:22:02.696815 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:02.698581 containerd[1485]: time="2025-01-29T16:22:02.698249724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slmg9,Uid:a1eeba69-c15e-494a-be84-f41668dde079,Namespace:kube-system,Attempt:0,}" Jan 29 16:22:02.722958 kubelet[2569]: E0129 16:22:02.722523 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:02.732129 containerd[1485]: time="2025-01-29T16:22:02.730300348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lzn9l,Uid:b4fcd33b-5bf0-4449-8868-3770e93403be,Namespace:kube-system,Attempt:0,}" Jan 29 16:22:02.867876 kubelet[2569]: E0129 16:22:02.867811 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:02.909713 kubelet[2569]: I0129 16:22:02.908673 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lrwsg" podStartSLOduration=7.272546355 podStartE2EDuration="17.908638647s" podCreationTimestamp="2025-01-29 16:21:45 +0000 UTC" firstStartedPulling="2025-01-29 16:21:47.300196359 +0000 UTC m=+7.820909275" lastFinishedPulling="2025-01-29 16:21:57.936288636 +0000 UTC m=+18.457001567" observedRunningTime="2025-01-29 16:22:02.908425695 +0000 UTC m=+23.429138633" watchObservedRunningTime="2025-01-29 16:22:02.908638647 +0000 UTC m=+23.429351585" Jan 29 16:22:03.869809 kubelet[2569]: E0129 16:22:03.869662 2569 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:04.289628 systemd-networkd[1370]: cilium_host: Link UP Jan 29 16:22:04.289927 systemd-networkd[1370]: cilium_net: Link UP Jan 29 16:22:04.290186 systemd-networkd[1370]: cilium_net: Gained carrier Jan 29 16:22:04.290392 systemd-networkd[1370]: cilium_host: Gained carrier Jan 29 16:22:04.404300 systemd-networkd[1370]: cilium_net: Gained IPv6LL Jan 29 16:22:04.464819 systemd-networkd[1370]: cilium_vxlan: Link UP Jan 29 16:22:04.464829 systemd-networkd[1370]: cilium_vxlan: Gained carrier Jan 29 16:22:04.847298 kernel: NET: Registered PF_ALG protocol family Jan 29 16:22:04.871944 kubelet[2569]: E0129 16:22:04.871897 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:04.971314 systemd-networkd[1370]: cilium_host: Gained IPv6LL Jan 29 16:22:05.612336 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Jan 29 16:22:05.857639 systemd-networkd[1370]: lxc_health: Link UP Jan 29 16:22:05.868396 systemd-networkd[1370]: lxc_health: Gained carrier Jan 29 16:22:06.302499 systemd-networkd[1370]: lxc0324be5ab319: Link UP Jan 29 16:22:06.321348 kernel: eth0: renamed from tmpfea34 Jan 29 16:22:06.325185 kernel: eth0: renamed from tmp41c2b Jan 29 16:22:06.326990 systemd-networkd[1370]: lxc6c67208f0c86: Link UP Jan 29 16:22:06.329798 systemd-networkd[1370]: lxc0324be5ab319: Gained carrier Jan 29 16:22:06.332382 systemd-networkd[1370]: lxc6c67208f0c86: Gained carrier Jan 29 16:22:07.187102 kubelet[2569]: E0129 16:22:07.186126 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:07.469227 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jan 29 16:22:07.886996 kubelet[2569]: E0129 16:22:07.886162 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:08.044309 systemd-networkd[1370]: lxc0324be5ab319: Gained IPv6LL Jan 29 16:22:08.045785 systemd-networkd[1370]: lxc6c67208f0c86: Gained IPv6LL Jan 29 16:22:08.888101 kubelet[2569]: E0129 16:22:08.888023 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:12.376020 containerd[1485]: time="2025-01-29T16:22:12.375864658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:22:12.379011 containerd[1485]: time="2025-01-29T16:22:12.376335796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:22:12.380333 containerd[1485]: time="2025-01-29T16:22:12.380232705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:12.380788 containerd[1485]: time="2025-01-29T16:22:12.380660673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:12.407621 containerd[1485]: time="2025-01-29T16:22:12.407425577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:22:12.407621 containerd[1485]: time="2025-01-29T16:22:12.407564928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:22:12.410194 containerd[1485]: time="2025-01-29T16:22:12.407590806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:12.410194 containerd[1485]: time="2025-01-29T16:22:12.407725519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:12.461637 systemd[1]: Started cri-containerd-fea3498b227490ab4a0b4e02a79a82295abc1f8d8d4f6cc3668a27c629c12904.scope - libcontainer container fea3498b227490ab4a0b4e02a79a82295abc1f8d8d4f6cc3668a27c629c12904. Jan 29 16:22:12.492368 systemd[1]: Started cri-containerd-41c2b6d25c136eabdb971c060b8270becc056b33e556f4fce4a4073ff317e6db.scope - libcontainer container 41c2b6d25c136eabdb971c060b8270becc056b33e556f4fce4a4073ff317e6db. Jan 29 16:22:12.608586 containerd[1485]: time="2025-01-29T16:22:12.608512350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lzn9l,Uid:b4fcd33b-5bf0-4449-8868-3770e93403be,Namespace:kube-system,Attempt:0,} returns sandbox id \"fea3498b227490ab4a0b4e02a79a82295abc1f8d8d4f6cc3668a27c629c12904\"" Jan 29 16:22:12.615106 kubelet[2569]: E0129 16:22:12.612962 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:12.620649 containerd[1485]: time="2025-01-29T16:22:12.620564513Z" level=info msg="CreateContainer within sandbox \"fea3498b227490ab4a0b4e02a79a82295abc1f8d8d4f6cc3668a27c629c12904\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:22:12.628567 containerd[1485]: time="2025-01-29T16:22:12.627528748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slmg9,Uid:a1eeba69-c15e-494a-be84-f41668dde079,Namespace:kube-system,Attempt:0,} returns sandbox id \"41c2b6d25c136eabdb971c060b8270becc056b33e556f4fce4a4073ff317e6db\"" Jan 29 16:22:12.631794 kubelet[2569]: E0129 16:22:12.631538 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:12.638844 containerd[1485]: time="2025-01-29T16:22:12.638779940Z" level=info msg="CreateContainer within sandbox \"41c2b6d25c136eabdb971c060b8270becc056b33e556f4fce4a4073ff317e6db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:22:12.669890 containerd[1485]: time="2025-01-29T16:22:12.669581160Z" level=info msg="CreateContainer within sandbox \"fea3498b227490ab4a0b4e02a79a82295abc1f8d8d4f6cc3668a27c629c12904\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b4d653a1d8a08ba0cf018d4f168cc8c7c9bea39bbcc6860c9256b380fe2ab93\"" Jan 29 16:22:12.671489 containerd[1485]: time="2025-01-29T16:22:12.670981388Z" level=info msg="StartContainer for \"6b4d653a1d8a08ba0cf018d4f168cc8c7c9bea39bbcc6860c9256b380fe2ab93\"" Jan 29 16:22:12.674730 containerd[1485]: 
time="2025-01-29T16:22:12.674451970Z" level=info msg="CreateContainer within sandbox \"41c2b6d25c136eabdb971c060b8270becc056b33e556f4fce4a4073ff317e6db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f49fe12b9812b13fb98ffd1df21ebab8717a1a793d7837d0cb297c51ab3970b3\"" Jan 29 16:22:12.676257 containerd[1485]: time="2025-01-29T16:22:12.675610276Z" level=info msg="StartContainer for \"f49fe12b9812b13fb98ffd1df21ebab8717a1a793d7837d0cb297c51ab3970b3\"" Jan 29 16:22:12.752921 systemd[1]: Started cri-containerd-6b4d653a1d8a08ba0cf018d4f168cc8c7c9bea39bbcc6860c9256b380fe2ab93.scope - libcontainer container 6b4d653a1d8a08ba0cf018d4f168cc8c7c9bea39bbcc6860c9256b380fe2ab93. Jan 29 16:22:12.756127 systemd[1]: Started cri-containerd-f49fe12b9812b13fb98ffd1df21ebab8717a1a793d7837d0cb297c51ab3970b3.scope - libcontainer container f49fe12b9812b13fb98ffd1df21ebab8717a1a793d7837d0cb297c51ab3970b3. Jan 29 16:22:12.824906 containerd[1485]: time="2025-01-29T16:22:12.824840670Z" level=info msg="StartContainer for \"6b4d653a1d8a08ba0cf018d4f168cc8c7c9bea39bbcc6860c9256b380fe2ab93\" returns successfully" Jan 29 16:22:12.840511 containerd[1485]: time="2025-01-29T16:22:12.839711142Z" level=info msg="StartContainer for \"f49fe12b9812b13fb98ffd1df21ebab8717a1a793d7837d0cb297c51ab3970b3\" returns successfully" Jan 29 16:22:12.908665 kubelet[2569]: E0129 16:22:12.907985 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:12.915861 kubelet[2569]: E0129 16:22:12.915575 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:12.992711 kubelet[2569]: I0129 16:22:12.992236 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-slmg9" podStartSLOduration=27.992193096 podStartE2EDuration="27.992193096s" podCreationTimestamp="2025-01-29 16:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:22:12.990048575 +0000 UTC m=+33.510761517" watchObservedRunningTime="2025-01-29 16:22:12.992193096 +0000 UTC m=+33.512906037" Jan 29 16:22:12.993332 kubelet[2569]: I0129 16:22:12.992656 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lzn9l" podStartSLOduration=27.992633864 podStartE2EDuration="27.992633864s" podCreationTimestamp="2025-01-29 16:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:22:12.951766195 +0000 UTC m=+33.472479135" watchObservedRunningTime="2025-01-29 16:22:12.992633864 +0000 UTC m=+33.513346805" Jan 29 16:22:13.394681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346767218.mount: Deactivated successfully. 
Jan 29 16:22:13.919271 kubelet[2569]: E0129 16:22:13.918645 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:13.920287 kubelet[2569]: E0129 16:22:13.920260 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:14.921739 kubelet[2569]: E0129 16:22:14.921545 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:33.335519 systemd[1]: Started sshd@7-164.92.66.114:22-139.178.89.65:47896.service - OpenSSH per-connection server daemon (139.178.89.65:47896). Jan 29 16:22:33.457789 sshd[3965]: Accepted publickey for core from 139.178.89.65 port 47896 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:22:33.461880 sshd-session[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:33.472793 systemd-logind[1467]: New session 8 of user core. Jan 29 16:22:33.479449 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:22:34.205220 sshd[3967]: Connection closed by 139.178.89.65 port 47896 Jan 29 16:22:34.206314 sshd-session[3965]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:34.213691 systemd[1]: sshd@7-164.92.66.114:22-139.178.89.65:47896.service: Deactivated successfully. Jan 29 16:22:34.217814 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:22:34.219234 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:22:34.220869 systemd-logind[1467]: Removed session 8. Jan 29 16:22:39.244960 systemd[1]: Started sshd@8-164.92.66.114:22-139.178.89.65:47910.service - OpenSSH per-connection server daemon (139.178.89.65:47910). Jan 29 16:22:39.339756 sshd[3980]: Accepted publickey for core from 139.178.89.65 port 47910 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:22:39.341325 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:39.353043 systemd-logind[1467]: New session 9 of user core. Jan 29 16:22:39.357673 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:22:39.647110 sshd[3982]: Connection closed by 139.178.89.65 port 47910 Jan 29 16:22:39.648729 sshd-session[3980]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:39.661296 systemd[1]: sshd@8-164.92.66.114:22-139.178.89.65:47910.service: Deactivated successfully. Jan 29 16:22:39.667828 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:22:39.672648 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:22:39.675315 systemd-logind[1467]: Removed session 9. Jan 29 16:22:44.676542 systemd[1]: Started sshd@9-164.92.66.114:22-139.178.89.65:36854.service - OpenSSH per-connection server daemon (139.178.89.65:36854).
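Each "Accepted publickey" entry above logs the client key's OpenSSH-style fingerprint (SHA256:1yg7Jh...): a SHA-256 digest of the wire-format public key blob, base64-encoded without padding. A self-contained sketch of that computation; the key material below is a placeholder, not the key from this journal:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// fingerprintSHA256 reproduces the format sshd logs on "Accepted publickey":
// SHA-256 over the decoded key blob, base64 without padding, "SHA256:" prefix.
func fingerprintSHA256(authorizedKeyLine string) (string, error) {
	fields := strings.Fields(authorizedKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed authorized_keys line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder blob (decodes to 51 bytes); hashing the user's real public
	// key would yield the SHA256:1yg7Jh... value seen in the journal.
	blob := "AAAAC3NzaC1lZDI1NTE5AAAAI" + strings.Repeat("A", 43)
	fp, err := fingerprintSHA256("ssh-ed25519 " + blob + " core@example")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(fp)
}
```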
Jan 29 16:22:44.743820 sshd[3997]: Accepted publickey for core from 139.178.89.65 port 36854 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:22:44.746272 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:44.754311 systemd-logind[1467]: New session 10 of user core. Jan 29 16:22:44.761515 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:22:44.998369 sshd[3999]: Connection closed by 139.178.89.65 port 36854 Jan 29 16:22:44.999339 sshd-session[3997]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:45.006438 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:22:45.010859 systemd[1]: sshd@9-164.92.66.114:22-139.178.89.65:36854.service: Deactivated successfully. Jan 29 16:22:45.019450 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:22:45.025993 systemd-logind[1467]: Removed session 10. Jan 29 16:22:50.021528 systemd[1]: Started sshd@10-164.92.66.114:22-139.178.89.65:36866.service - OpenSSH per-connection server daemon (139.178.89.65:36866). Jan 29 16:22:50.088017 sshd[4014]: Accepted publickey for core from 139.178.89.65 port 36866 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:22:50.091474 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:50.102895 systemd-logind[1467]: New session 11 of user core. Jan 29 16:22:50.110446 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:22:50.285964 sshd[4016]: Connection closed by 139.178.89.65 port 36866 Jan 29 16:22:50.287986 sshd-session[4014]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:50.302772 systemd[1]: sshd@10-164.92.66.114:22-139.178.89.65:36866.service: Deactivated successfully. Jan 29 16:22:50.308676 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:22:50.311391 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:22:50.321348 systemd[1]: Started sshd@11-164.92.66.114:22-139.178.89.65:36870.service - OpenSSH per-connection server daemon (139.178.89.65:36870). Jan 29 16:22:50.322815 systemd-logind[1467]: Removed session 11. Jan 29 16:22:50.404220 sshd[4028]: Accepted publickey for core from 139.178.89.65 port 36870 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:22:50.407459 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:50.416592 systemd-logind[1467]: New session 12 of user core. Jan 29 16:22:50.423369 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:22:50.680127 sshd[4031]: Connection closed by 139.178.89.65 port 36870 Jan 29 16:22:50.680878 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:50.698233 systemd[1]: sshd@11-164.92.66.114:22-139.178.89.65:36870.service: Deactivated successfully. Jan 29 16:22:50.706205 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:22:50.710744 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:22:50.719666 systemd[1]: Started sshd@12-164.92.66.114:22-139.178.89.65:36884.service - OpenSSH per-connection server daemon (139.178.89.65:36884). Jan 29 16:22:50.726870 systemd-logind[1467]: Removed session 12. 
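The sshd@N-... unit names above are systemd per-connection instances: the socket unit accepts each TCP connection and spawns a service whose instance name appears to encode a counter plus the local and remote endpoints, which makes it easy to correlate with the matching "Accepted publickey" lines. A small parser fitted to the names in this journal; the pattern is an assumption drawn from these entries, not a systemd guarantee:

```go
package main

import (
	"fmt"
	"regexp"
)

// Per-connection units in this journal look like:
//   sshd@8-164.92.66.114:22-139.178.89.65:47896.service
// i.e. sshd@<n>-<local addr>:<port>-<remote addr>:<port>.service.
var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	unit := "sshd@9-164.92.66.114:22-139.178.89.65:36854.service"
	m := unitRE.FindStringSubmatch(unit)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("instance %s: local %s:%s <- remote %s:%s\n",
		m[1], m[2], m[3], m[4], m[5])
}
```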
Jan 29 16:22:50.797466 sshd[4040]: Accepted publickey for core from 139.178.89.65 port 36884 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:22:50.801218 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:50.811396 systemd-logind[1467]: New session 13 of user core. Jan 29 16:22:50.820432 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:22:50.979457 sshd[4043]: Connection closed by 139.178.89.65 port 36884 Jan 29 16:22:50.980542 sshd-session[4040]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:50.986259 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:22:50.987340 systemd[1]: sshd@12-164.92.66.114:22-139.178.89.65:36884.service: Deactivated successfully. Jan 29 16:22:50.996569 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:22:50.998530 systemd-logind[1467]: Removed session 13. Jan 29 16:22:51.640142 kubelet[2569]: E0129 16:22:51.639817 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:56.005356 systemd[1]: Started sshd@13-164.92.66.114:22-139.178.89.65:39604.service - OpenSSH per-connection server daemon (139.178.89.65:39604). Jan 29 16:22:56.059484 sshd[4056]: Accepted publickey for core from 139.178.89.65 port 39604 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:22:56.061357 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:56.070952 systemd-logind[1467]: New session 14 of user core. Jan 29 16:22:56.076440 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:22:56.249091 sshd[4058]: Connection closed by 139.178.89.65 port 39604 Jan 29 16:22:56.249985 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:56.257676 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:22:56.259009 systemd[1]: sshd@13-164.92.66.114:22-139.178.89.65:39604.service: Deactivated successfully. Jan 29 16:22:56.263932 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:22:56.266541 systemd-logind[1467]: Removed session 14. Jan 29 16:22:58.640605 kubelet[2569]: E0129 16:22:58.640022 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:22:59.317256 systemd[1]: Started sshd@14-164.92.66.114:22-156.229.233.219:60052.service - OpenSSH per-connection server daemon (156.229.233.219:60052). Jan 29 16:23:01.270459 systemd[1]: Started sshd@15-164.92.66.114:22-139.178.89.65:43668.service - OpenSSH per-connection server daemon (139.178.89.65:43668). Jan 29 16:23:01.328917 sshd[4073]: Accepted publickey for core from 139.178.89.65 port 43668 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:01.329357 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:01.337526 systemd-logind[1467]: New session 15 of user core. Jan 29 16:23:01.345455 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 16:23:01.507001 sshd[4075]: Connection closed by 139.178.89.65 port 43668 Jan 29 16:23:01.508349 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:01.516980 systemd[1]: sshd@15-164.92.66.114:22-139.178.89.65:43668.service: Deactivated successfully. Jan 29 16:23:01.522947 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:23:01.525679 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:23:01.528252 systemd-logind[1467]: Removed session 15. Jan 29 16:23:02.727424 sshd[4070]: Invalid user nginx from 156.229.233.219 port 60052 Jan 29 16:23:03.469163 sshd[4070]: Connection closed by invalid user nginx 156.229.233.219 port 60052 [preauth] Jan 29 16:23:03.472882 systemd[1]: sshd@14-164.92.66.114:22-156.229.233.219:60052.service: Deactivated successfully. Jan 29 16:23:03.640135 kubelet[2569]: E0129 16:23:03.639285 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:06.535718 systemd[1]: Started sshd@16-164.92.66.114:22-139.178.89.65:43682.service - OpenSSH per-connection server daemon (139.178.89.65:43682). Jan 29 16:23:06.603402 sshd[4090]: Accepted publickey for core from 139.178.89.65 port 43682 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:06.605179 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:06.613513 systemd-logind[1467]: New session 16 of user core. Jan 29 16:23:06.618550 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:23:06.640517 kubelet[2569]: E0129 16:23:06.640461 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:06.756853 sshd[4092]: Connection closed by 139.178.89.65 port 43682 Jan 29 16:23:06.757776 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:06.762552 systemd[1]: sshd@16-164.92.66.114:22-139.178.89.65:43682.service: Deactivated successfully. Jan 29 16:23:06.765628 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:23:06.766981 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:23:06.768283 systemd-logind[1467]: Removed session 16. Jan 29 16:23:11.786948 systemd[1]: Started sshd@17-164.92.66.114:22-139.178.89.65:37932.service - OpenSSH per-connection server daemon (139.178.89.65:37932). Jan 29 16:23:11.843607 sshd[4105]: Accepted publickey for core from 139.178.89.65 port 37932 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:11.845553 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:11.851707 systemd-logind[1467]: New session 17 of user core. Jan 29 16:23:11.857516 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:23:12.018639 sshd[4107]: Connection closed by 139.178.89.65 port 37932 Jan 29 16:23:12.021469 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:12.035677 systemd[1]: sshd@17-164.92.66.114:22-139.178.89.65:37932.service: Deactivated successfully. Jan 29 16:23:12.040858 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:23:12.044287 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. 
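The "Invalid user nginx" / "Connection closed by invalid user ... [preauth]" pair above is an unauthenticated Internet scan probing a common service-account name; the connection was dropped before authentication completed. As an illustrative triage aid (not part of this system), a scanner that tallies such preauth probes per source address when fed journal text on stdin; the regular expression is an assumption fitted to the message shape seen here:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches sshd preauth probes like:
//   sshd[4070]: Invalid user nginx from 156.229.233.219 port 60052
var invalidUser = regexp.MustCompile(`sshd\[\d+\]: Invalid user (\S+) from (\S+) port \d+`)

func main() {
	counts := map[string]int{} // source address -> probe count
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := invalidUser.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++
		}
	}
	for addr, n := range counts {
		fmt.Printf("%s\t%d\n", addr, n)
	}
}
```

Piping journalctl output for the ssh unit into this program prints one line per scanning host with its attempt count.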
Jan 29 16:23:12.052719 systemd[1]: Started sshd@18-164.92.66.114:22-139.178.89.65:37934.service - OpenSSH per-connection server daemon (139.178.89.65:37934). Jan 29 16:23:12.054541 systemd-logind[1467]: Removed session 17. Jan 29 16:23:12.123264 sshd[4118]: Accepted publickey for core from 139.178.89.65 port 37934 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:12.124952 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:12.132262 systemd-logind[1467]: New session 18 of user core. Jan 29 16:23:12.140479 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:23:12.529177 sshd[4121]: Connection closed by 139.178.89.65 port 37934 Jan 29 16:23:12.531079 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:12.545362 systemd[1]: sshd@18-164.92.66.114:22-139.178.89.65:37934.service: Deactivated successfully. Jan 29 16:23:12.550274 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:23:12.552354 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:23:12.563760 systemd[1]: Started sshd@19-164.92.66.114:22-139.178.89.65:37944.service - OpenSSH per-connection server daemon (139.178.89.65:37944). Jan 29 16:23:12.566412 systemd-logind[1467]: Removed session 18. Jan 29 16:23:12.652504 sshd[4130]: Accepted publickey for core from 139.178.89.65 port 37944 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:12.656419 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:12.672233 systemd-logind[1467]: New session 19 of user core. Jan 29 16:23:12.677553 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:23:15.005682 sshd[4133]: Connection closed by 139.178.89.65 port 37944 Jan 29 16:23:15.006556 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:15.031654 systemd[1]: Started sshd@20-164.92.66.114:22-139.178.89.65:37946.service - OpenSSH per-connection server daemon (139.178.89.65:37946). Jan 29 16:23:15.032601 systemd[1]: sshd@19-164.92.66.114:22-139.178.89.65:37944.service: Deactivated successfully. Jan 29 16:23:15.040997 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:23:15.045430 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:23:15.051031 systemd-logind[1467]: Removed session 19. Jan 29 16:23:15.165136 sshd[4145]: Accepted publickey for core from 139.178.89.65 port 37946 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:15.169198 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:15.181721 systemd-logind[1467]: New session 20 of user core. Jan 29 16:23:15.192475 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:23:15.664426 sshd[4152]: Connection closed by 139.178.89.65 port 37946 Jan 29 16:23:15.665683 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:15.686888 systemd[1]: sshd@20-164.92.66.114:22-139.178.89.65:37946.service: Deactivated successfully. Jan 29 16:23:15.692878 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:23:15.695976 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:23:15.709334 systemd[1]: Started sshd@21-164.92.66.114:22-139.178.89.65:37952.service - OpenSSH per-connection server daemon (139.178.89.65:37952). 
Jan 29 16:23:15.710886 systemd-logind[1467]: Removed session 20. Jan 29 16:23:15.776139 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 37952 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:15.777902 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:15.786880 systemd-logind[1467]: New session 21 of user core. Jan 29 16:23:15.798405 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:23:15.985119 sshd[4164]: Connection closed by 139.178.89.65 port 37952 Jan 29 16:23:15.987435 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:15.995022 systemd[1]: sshd@21-164.92.66.114:22-139.178.89.65:37952.service: Deactivated successfully. Jan 29 16:23:16.000840 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:23:16.002705 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:23:16.003950 systemd-logind[1467]: Removed session 21. Jan 29 16:23:19.641100 kubelet[2569]: E0129 16:23:19.640803 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:21.016342 systemd[1]: Started sshd@22-164.92.66.114:22-139.178.89.65:52390.service - OpenSSH per-connection server daemon (139.178.89.65:52390). Jan 29 16:23:21.080146 sshd[4179]: Accepted publickey for core from 139.178.89.65 port 52390 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:21.082246 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:21.090658 systemd-logind[1467]: New session 22 of user core. Jan 29 16:23:21.100367 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:23:21.252307 sshd[4181]: Connection closed by 139.178.89.65 port 52390 Jan 29 16:23:21.253375 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:21.259593 systemd[1]: sshd@22-164.92.66.114:22-139.178.89.65:52390.service: Deactivated successfully. Jan 29 16:23:21.262941 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:23:21.264795 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:23:21.266692 systemd-logind[1467]: Removed session 22. Jan 29 16:23:21.641744 kubelet[2569]: E0129 16:23:21.639283 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:24.649402 kubelet[2569]: E0129 16:23:24.649053 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:26.278614 systemd[1]: Started sshd@23-164.92.66.114:22-139.178.89.65:52398.service - OpenSSH per-connection server daemon (139.178.89.65:52398). Jan 29 16:23:26.354183 sshd[4196]: Accepted publickey for core from 139.178.89.65 port 52398 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:26.356151 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:26.366228 systemd-logind[1467]: New session 23 of user core. Jan 29 16:23:26.372524 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 29 16:23:26.566230 sshd[4198]: Connection closed by 139.178.89.65 port 52398 Jan 29 16:23:26.567017 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:26.573536 systemd[1]: sshd@23-164.92.66.114:22-139.178.89.65:52398.service: Deactivated successfully. Jan 29 16:23:26.578537 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:23:26.581968 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:23:26.585023 systemd-logind[1467]: Removed session 23. Jan 29 16:23:31.590609 systemd[1]: Started sshd@24-164.92.66.114:22-139.178.89.65:48464.service - OpenSSH per-connection server daemon (139.178.89.65:48464). Jan 29 16:23:31.652169 sshd[4210]: Accepted publickey for core from 139.178.89.65 port 48464 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:31.653376 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:31.664984 systemd-logind[1467]: New session 24 of user core. Jan 29 16:23:31.669869 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 16:23:31.837726 sshd[4212]: Connection closed by 139.178.89.65 port 48464 Jan 29 16:23:31.836515 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:31.842371 systemd[1]: sshd@24-164.92.66.114:22-139.178.89.65:48464.service: Deactivated successfully. Jan 29 16:23:31.847949 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:23:31.851055 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:23:31.854279 systemd-logind[1467]: Removed session 24. Jan 29 16:23:33.641636 kubelet[2569]: E0129 16:23:33.641567 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:36.864191 systemd[1]: Started sshd@25-164.92.66.114:22-139.178.89.65:48474.service - OpenSSH per-connection server daemon (139.178.89.65:48474). Jan 29 16:23:36.945368 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 48474 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:36.949046 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:36.959190 systemd-logind[1467]: New session 25 of user core. Jan 29 16:23:36.966526 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 16:23:37.134316 sshd[4225]: Connection closed by 139.178.89.65 port 48474 Jan 29 16:23:37.135331 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:37.150428 systemd[1]: sshd@25-164.92.66.114:22-139.178.89.65:48474.service: Deactivated successfully. Jan 29 16:23:37.155507 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:23:37.159830 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:23:37.168749 systemd[1]: Started sshd@26-164.92.66.114:22-139.178.89.65:48486.service - OpenSSH per-connection server daemon (139.178.89.65:48486). Jan 29 16:23:37.173794 systemd-logind[1467]: Removed session 25. Jan 29 16:23:37.236595 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 48486 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:37.239368 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:37.250226 systemd-logind[1467]: New session 26 of user core. 
Jan 29 16:23:37.256435 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 16:23:38.781756 containerd[1485]: time="2025-01-29T16:23:38.781674894Z" level=info msg="StopContainer for \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\" with timeout 30 (s)" Jan 29 16:23:38.783232 containerd[1485]: time="2025-01-29T16:23:38.782934275Z" level=info msg="Stop container \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\" with signal terminated" Jan 29 16:23:38.799586 containerd[1485]: time="2025-01-29T16:23:38.799464706Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:23:38.815679 containerd[1485]: time="2025-01-29T16:23:38.815387097Z" level=info msg="StopContainer for \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\" with timeout 2 (s)" Jan 29 16:23:38.817314 containerd[1485]: time="2025-01-29T16:23:38.817226553Z" level=info msg="Stop container \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\" with signal terminated" Jan 29 16:23:38.841157 systemd-networkd[1370]: lxc_health: Link DOWN Jan 29 16:23:38.841169 systemd-networkd[1370]: lxc_health: Lost carrier Jan 29 16:23:38.865744 systemd[1]: cri-containerd-2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620.scope: Deactivated successfully. Jan 29 16:23:38.866521 systemd[1]: cri-containerd-2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620.scope: Consumed 569ms CPU time, 28.1M memory peak, 1.8M read from disk, 4K written to disk. Jan 29 16:23:38.881798 systemd[1]: cri-containerd-5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95.scope: Deactivated successfully. Jan 29 16:23:38.882684 systemd[1]: cri-containerd-5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95.scope: Consumed 9.943s CPU time, 154.5M memory peak, 32.1M read from disk, 13.3M written to disk. Jan 29 16:23:38.930806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620-rootfs.mount: Deactivated successfully. 
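The "StopContainer ... with timeout 30 (s)" and "with signal terminated" entries above show kubelet driving the CRI shutdown sequence: the runtime sends SIGTERM, then escalates to SIGKILL once the grace period lapses. A hedged sketch of the underlying CRI call, assuming containerd's default socket path (not read from this log); the container ID is the one from the entries above:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 35*time.Second)
	defer cancel()

	// Timeout: 30 mirrors "with timeout 30 (s)": SIGTERM now, SIGKILL if the
	// container is still alive after 30 seconds.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620",
		Timeout:     30,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```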
Jan 29 16:23:38.938104 containerd[1485]: time="2025-01-29T16:23:38.937571106Z" level=info msg="shim disconnected" id=5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95 namespace=k8s.io Jan 29 16:23:38.938104 containerd[1485]: time="2025-01-29T16:23:38.937817039Z" level=warning msg="cleaning up after shim disconnected" id=5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95 namespace=k8s.io Jan 29 16:23:38.938104 containerd[1485]: time="2025-01-29T16:23:38.937837378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:38.938104 containerd[1485]: time="2025-01-29T16:23:38.937904491Z" level=info msg="shim disconnected" id=2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620 namespace=k8s.io Jan 29 16:23:38.938104 containerd[1485]: time="2025-01-29T16:23:38.937940117Z" level=warning msg="cleaning up after shim disconnected" id=2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620 namespace=k8s.io Jan 29 16:23:38.938104 containerd[1485]: time="2025-01-29T16:23:38.937951790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:38.939672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95-rootfs.mount: Deactivated successfully. Jan 29 16:23:38.970054 containerd[1485]: time="2025-01-29T16:23:38.969563164Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:23:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:23:38.974568 containerd[1485]: time="2025-01-29T16:23:38.974496916Z" level=info msg="StopContainer for \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\" returns successfully" Jan 29 16:23:38.975508 containerd[1485]: time="2025-01-29T16:23:38.975472598Z" level=info msg="StopPodSandbox for \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\"" Jan 29 16:23:38.980510 containerd[1485]: time="2025-01-29T16:23:38.980450689Z" level=info msg="StopContainer for \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\" returns successfully" Jan 29 16:23:38.981847 containerd[1485]: time="2025-01-29T16:23:38.981784425Z" level=info msg="StopPodSandbox for \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\"" Jan 29 16:23:38.996148 containerd[1485]: time="2025-01-29T16:23:38.983788424Z" level=info msg="Container to stop \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:23:38.997427 containerd[1485]: time="2025-01-29T16:23:38.983841492Z" level=info msg="Container to stop \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:23:38.997427 containerd[1485]: time="2025-01-29T16:23:38.997341736Z" level=info msg="Container to stop \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:23:38.997427 containerd[1485]: time="2025-01-29T16:23:38.997359427Z" level=info msg="Container to stop \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:23:38.997427 containerd[1485]: time="2025-01-29T16:23:38.997369334Z" level=info msg="Container to stop 
\"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:23:38.997427 containerd[1485]: time="2025-01-29T16:23:38.997379889Z" level=info msg="Container to stop \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:23:38.998633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340-shm.mount: Deactivated successfully. Jan 29 16:23:39.024172 systemd[1]: cri-containerd-6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2.scope: Deactivated successfully. Jan 29 16:23:39.027050 systemd[1]: cri-containerd-c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340.scope: Deactivated successfully. Jan 29 16:23:39.090236 containerd[1485]: time="2025-01-29T16:23:39.088319262Z" level=info msg="shim disconnected" id=6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2 namespace=k8s.io Jan 29 16:23:39.090236 containerd[1485]: time="2025-01-29T16:23:39.088431690Z" level=warning msg="cleaning up after shim disconnected" id=6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2 namespace=k8s.io Jan 29 16:23:39.090236 containerd[1485]: time="2025-01-29T16:23:39.088452534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:39.097754 containerd[1485]: time="2025-01-29T16:23:39.096035741Z" level=info msg="shim disconnected" id=c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340 namespace=k8s.io Jan 29 16:23:39.097754 containerd[1485]: time="2025-01-29T16:23:39.097378311Z" level=warning msg="cleaning up after shim disconnected" id=c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340 namespace=k8s.io Jan 29 16:23:39.097754 containerd[1485]: time="2025-01-29T16:23:39.097403959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:39.125903 containerd[1485]: time="2025-01-29T16:23:39.125742864Z" level=info msg="TearDown network for sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" successfully" Jan 29 16:23:39.125903 containerd[1485]: time="2025-01-29T16:23:39.125818397Z" level=info msg="StopPodSandbox for \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" returns successfully" Jan 29 16:23:39.133435 containerd[1485]: time="2025-01-29T16:23:39.132862489Z" level=info msg="TearDown network for sandbox \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" successfully" Jan 29 16:23:39.133435 containerd[1485]: time="2025-01-29T16:23:39.132985200Z" level=info msg="StopPodSandbox for \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" returns successfully" Jan 29 16:23:39.211104 kubelet[2569]: I0129 16:23:39.208755 2569 scope.go:117] "RemoveContainer" containerID="2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620" Jan 29 16:23:39.222469 containerd[1485]: time="2025-01-29T16:23:39.221833300Z" level=info msg="RemoveContainer for \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\"" Jan 29 16:23:39.230180 containerd[1485]: time="2025-01-29T16:23:39.229978827Z" level=info msg="RemoveContainer for \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\" returns successfully" Jan 29 16:23:39.234283 kubelet[2569]: I0129 16:23:39.231462 2569 scope.go:117] "RemoveContainer" 
containerID="2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620" Jan 29 16:23:39.234283 kubelet[2569]: E0129 16:23:39.232841 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\": not found" containerID="2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620" Jan 29 16:23:39.234522 containerd[1485]: time="2025-01-29T16:23:39.232092376Z" level=error msg="ContainerStatus for \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\": not found" Jan 29 16:23:39.237231 kubelet[2569]: I0129 16:23:39.232914 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620"} err="failed to get container status \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ec1344e60dea9861f1d241b4432bf4ecf2c6f239abf4eb769ddf79645515620\": not found" Jan 29 16:23:39.237556 kubelet[2569]: I0129 16:23:39.237521 2569 scope.go:117] "RemoveContainer" containerID="5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95" Jan 29 16:23:39.244019 containerd[1485]: time="2025-01-29T16:23:39.243961182Z" level=info msg="RemoveContainer for \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\"" Jan 29 16:23:39.249180 containerd[1485]: time="2025-01-29T16:23:39.249100811Z" level=info msg="RemoveContainer for \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\" returns successfully" Jan 29 16:23:39.250041 kubelet[2569]: I0129 16:23:39.249996 2569 scope.go:117] "RemoveContainer" containerID="3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7" Jan 29 16:23:39.254302 containerd[1485]: time="2025-01-29T16:23:39.254100863Z" level=info msg="RemoveContainer for \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\"" Jan 29 16:23:39.259883 containerd[1485]: time="2025-01-29T16:23:39.259775688Z" level=info msg="RemoveContainer for \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\" returns successfully" Jan 29 16:23:39.260251 kubelet[2569]: I0129 16:23:39.260205 2569 scope.go:117] "RemoveContainer" containerID="6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559" Jan 29 16:23:39.262304 containerd[1485]: time="2025-01-29T16:23:39.262235635Z" level=info msg="RemoveContainer for \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\"" Jan 29 16:23:39.266565 containerd[1485]: time="2025-01-29T16:23:39.266486364Z" level=info msg="RemoveContainer for \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\" returns successfully" Jan 29 16:23:39.267239 kubelet[2569]: I0129 16:23:39.267184 2569 scope.go:117] "RemoveContainer" containerID="b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca" Jan 29 16:23:39.270258 containerd[1485]: time="2025-01-29T16:23:39.269529183Z" level=info msg="RemoveContainer for \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\"" Jan 29 16:23:39.277219 containerd[1485]: time="2025-01-29T16:23:39.277060594Z" level=info msg="RemoveContainer for \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\" 
returns successfully" Jan 29 16:23:39.277895 kubelet[2569]: I0129 16:23:39.277549 2569 scope.go:117] "RemoveContainer" containerID="8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a" Jan 29 16:23:39.281164 containerd[1485]: time="2025-01-29T16:23:39.280789407Z" level=info msg="RemoveContainer for \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\"" Jan 29 16:23:39.286509 containerd[1485]: time="2025-01-29T16:23:39.286418012Z" level=info msg="RemoveContainer for \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\" returns successfully" Jan 29 16:23:39.287645 kubelet[2569]: I0129 16:23:39.287103 2569 scope.go:117] "RemoveContainer" containerID="5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95" Jan 29 16:23:39.287804 containerd[1485]: time="2025-01-29T16:23:39.287523306Z" level=error msg="ContainerStatus for \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\": not found" Jan 29 16:23:39.288248 kubelet[2569]: E0129 16:23:39.288125 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\": not found" containerID="5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95" Jan 29 16:23:39.288483 kubelet[2569]: I0129 16:23:39.288201 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95"} err="failed to get container status \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a5fbe3cf78a1158ec4480f449823ce1970e3f796bd280e5dc776e10f7cb8f95\": not found" Jan 29 16:23:39.288483 kubelet[2569]: I0129 16:23:39.288351 2569 scope.go:117] "RemoveContainer" containerID="3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7" Jan 29 16:23:39.294115 kubelet[2569]: I0129 16:23:39.292599 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4221b60b-8726-4e68-b252-a62f89e937dd-cilium-config-path\") pod \"4221b60b-8726-4e68-b252-a62f89e937dd\" (UID: \"4221b60b-8726-4e68-b252-a62f89e937dd\") " Jan 29 16:23:39.294115 kubelet[2569]: I0129 16:23:39.292671 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-cgroup\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294115 kubelet[2569]: I0129 16:23:39.292707 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-hubble-tls\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294115 kubelet[2569]: I0129 16:23:39.292737 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-net\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: 
\"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294115 kubelet[2569]: I0129 16:23:39.292768 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f08433b6-100d-418e-a7a8-2d1174bf43b0-clustermesh-secrets\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294115 kubelet[2569]: I0129 16:23:39.292789 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-xtables-lock\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294553 kubelet[2569]: I0129 16:23:39.292811 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-run\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294553 kubelet[2569]: I0129 16:23:39.292837 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-lib-modules\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294553 kubelet[2569]: I0129 16:23:39.292860 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvf5\" (UniqueName: \"kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-kube-api-access-lwvf5\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294553 kubelet[2569]: I0129 16:23:39.292883 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-etc-cni-netd\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294553 kubelet[2569]: I0129 16:23:39.292905 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cni-path\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294553 kubelet[2569]: I0129 16:23:39.292952 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m58cl\" (UniqueName: \"kubernetes.io/projected/4221b60b-8726-4e68-b252-a62f89e937dd-kube-api-access-m58cl\") pod \"4221b60b-8726-4e68-b252-a62f89e937dd\" (UID: \"4221b60b-8726-4e68-b252-a62f89e937dd\") " Jan 29 16:23:39.294846 kubelet[2569]: I0129 16:23:39.292974 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-hostproc\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294846 kubelet[2569]: I0129 16:23:39.292995 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-kernel\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " 
Jan 29 16:23:39.294846 kubelet[2569]: I0129 16:23:39.293015 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-bpf-maps\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294846 kubelet[2569]: I0129 16:23:39.293038 2569 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-config-path\") pod \"f08433b6-100d-418e-a7a8-2d1174bf43b0\" (UID: \"f08433b6-100d-418e-a7a8-2d1174bf43b0\") " Jan 29 16:23:39.294846 kubelet[2569]: I0129 16:23:39.293305 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.295272 kubelet[2569]: I0129 16:23:39.295210 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.298414 kubelet[2569]: I0129 16:23:39.298341 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.302538 kubelet[2569]: I0129 16:23:39.302466 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.302746 kubelet[2569]: I0129 16:23:39.302570 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.303620 containerd[1485]: time="2025-01-29T16:23:39.303393578Z" level=error msg="ContainerStatus for \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\": not found" Jan 29 16:23:39.305204 kubelet[2569]: I0129 16:23:39.304826 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.310206 kubelet[2569]: I0129 16:23:39.305011 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.310206 kubelet[2569]: I0129 16:23:39.305425 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4221b60b-8726-4e68-b252-a62f89e937dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4221b60b-8726-4e68-b252-a62f89e937dd" (UID: "4221b60b-8726-4e68-b252-a62f89e937dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:23:39.310206 kubelet[2569]: I0129 16:23:39.309162 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.310206 kubelet[2569]: I0129 16:23:39.309199 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.310206 kubelet[2569]: I0129 16:23:39.309231 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:23:39.310630 kubelet[2569]: E0129 16:23:39.309680 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\": not found" containerID="3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7" Jan 29 16:23:39.310630 kubelet[2569]: I0129 16:23:39.309734 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7"} err="failed to get container status \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cbb75c24464520988bddbe42ca2e8450a9e3d71f2ed1f8d14657d747004d0a7\": not found" Jan 29 16:23:39.310630 kubelet[2569]: I0129 16:23:39.309768 2569 scope.go:117] "RemoveContainer" containerID="6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559" Jan 29 16:23:39.311322 containerd[1485]: time="2025-01-29T16:23:39.311136354Z" level=error msg="ContainerStatus for \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\": not found" Jan 29 16:23:39.311523 kubelet[2569]: E0129 16:23:39.311486 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\": not found" containerID="6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559" Jan 29 16:23:39.311625 kubelet[2569]: I0129 16:23:39.311531 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559"} err="failed to get container status \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\": rpc error: code = NotFound desc = an error occurred when try to find container \"6399f97224a518be73442036b5a69028065c9359c3d28e739aa8879e98b98559\": not found" Jan 29 16:23:39.311625 kubelet[2569]: I0129 16:23:39.311564 2569 scope.go:117] "RemoveContainer" containerID="b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca" Jan 29 16:23:39.314839 containerd[1485]: time="2025-01-29T16:23:39.313983423Z" level=error msg="ContainerStatus for \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\": not found" Jan 29 16:23:39.316299 kubelet[2569]: I0129 16:23:39.314762 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4221b60b-8726-4e68-b252-a62f89e937dd-kube-api-access-m58cl" (OuterVolumeSpecName: "kube-api-access-m58cl") pod "4221b60b-8726-4e68-b252-a62f89e937dd" (UID: "4221b60b-8726-4e68-b252-a62f89e937dd"). InnerVolumeSpecName "kube-api-access-m58cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:23:39.318884 kubelet[2569]: E0129 16:23:39.318433 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\": not found" containerID="b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca" Jan 29 16:23:39.318884 kubelet[2569]: I0129 16:23:39.318541 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca"} err="failed to get container status \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6c645d2259e627b5451f474c013505a920ed17c9af1d1f804d7cec870589cca\": not found" Jan 29 16:23:39.318884 kubelet[2569]: I0129 16:23:39.318590 2569 scope.go:117] "RemoveContainer" containerID="8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a" Jan 29 16:23:39.319289 containerd[1485]: time="2025-01-29T16:23:39.319007833Z" level=error msg="ContainerStatus for \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\": not found" Jan 29 16:23:39.319354 kubelet[2569]: I0129 16:23:39.319297 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-kube-api-access-lwvf5" (OuterVolumeSpecName: "kube-api-access-lwvf5") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "kube-api-access-lwvf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:23:39.319765 kubelet[2569]: E0129 16:23:39.319727 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\": not found" containerID="8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a" Jan 29 16:23:39.319857 kubelet[2569]: I0129 16:23:39.319799 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a"} err="failed to get container status \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ba4541eee9828c902fb6727af9c45b5b47c27b71da8eed2ff4f9355d0dcfb5a\": not found" Jan 29 16:23:39.322053 kubelet[2569]: I0129 16:23:39.320495 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:23:39.322053 kubelet[2569]: I0129 16:23:39.321242 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:23:39.324358 kubelet[2569]: I0129 16:23:39.324062 2569 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f08433b6-100d-418e-a7a8-2d1174bf43b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f08433b6-100d-418e-a7a8-2d1174bf43b0" (UID: "f08433b6-100d-418e-a7a8-2d1174bf43b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.394525 2569 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-kernel\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.395219 2569 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-bpf-maps\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.395264 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-config-path\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.395290 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4221b60b-8726-4e68-b252-a62f89e937dd-cilium-config-path\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.395304 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-cgroup\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.395318 2569 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-hubble-tls\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.395331 2569 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-host-proc-sys-net\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396205 kubelet[2569]: I0129 16:23:39.395345 2569 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f08433b6-100d-418e-a7a8-2d1174bf43b0-clustermesh-secrets\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396740 kubelet[2569]: I0129 16:23:39.395361 2569 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-lib-modules\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 
16:23:39.396740 kubelet[2569]: I0129 16:23:39.395375 2569 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lwvf5\" (UniqueName: \"kubernetes.io/projected/f08433b6-100d-418e-a7a8-2d1174bf43b0-kube-api-access-lwvf5\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396740 kubelet[2569]: I0129 16:23:39.395388 2569 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-etc-cni-netd\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396740 kubelet[2569]: I0129 16:23:39.395401 2569 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cni-path\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396740 kubelet[2569]: I0129 16:23:39.395415 2569 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-m58cl\" (UniqueName: \"kubernetes.io/projected/4221b60b-8726-4e68-b252-a62f89e937dd-kube-api-access-m58cl\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396740 kubelet[2569]: I0129 16:23:39.395431 2569 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-xtables-lock\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396740 kubelet[2569]: I0129 16:23:39.395444 2569 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-cilium-run\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.396740 kubelet[2569]: I0129 16:23:39.395457 2569 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f08433b6-100d-418e-a7a8-2d1174bf43b0-hostproc\") on node \"ci-4230.0.0-8-df8e9582f3\" DevicePath \"\"" Jan 29 16:23:39.517083 systemd[1]: Removed slice kubepods-besteffort-pod4221b60b_8726_4e68_b252_a62f89e937dd.slice - libcontainer container kubepods-besteffort-pod4221b60b_8726_4e68_b252_a62f89e937dd.slice. Jan 29 16:23:39.517640 systemd[1]: kubepods-besteffort-pod4221b60b_8726_4e68_b252_a62f89e937dd.slice: Consumed 608ms CPU time, 28.4M memory peak, 1.8M read from disk, 4K written to disk. Jan 29 16:23:39.538882 systemd[1]: Removed slice kubepods-burstable-podf08433b6_100d_418e_a7a8_2d1174bf43b0.slice - libcontainer container kubepods-burstable-podf08433b6_100d_418e_a7a8_2d1174bf43b0.slice. Jan 29 16:23:39.539150 systemd[1]: kubepods-burstable-podf08433b6_100d_418e_a7a8_2d1174bf43b0.slice: Consumed 10.061s CPU time, 154.8M memory peak, 32.1M read from disk, 15.6M written to disk. 
Jan 29 16:23:39.635777 containerd[1485]: time="2025-01-29T16:23:39.635296555Z" level=info msg="StopPodSandbox for \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\"" Jan 29 16:23:39.635777 containerd[1485]: time="2025-01-29T16:23:39.635483793Z" level=info msg="TearDown network for sandbox \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" successfully" Jan 29 16:23:39.635777 containerd[1485]: time="2025-01-29T16:23:39.635505490Z" level=info msg="StopPodSandbox for \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" returns successfully" Jan 29 16:23:39.638318 containerd[1485]: time="2025-01-29T16:23:39.636364213Z" level=info msg="RemovePodSandbox for \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\"" Jan 29 16:23:39.638318 containerd[1485]: time="2025-01-29T16:23:39.636424117Z" level=info msg="Forcibly stopping sandbox \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\"" Jan 29 16:23:39.638318 containerd[1485]: time="2025-01-29T16:23:39.636509762Z" level=info msg="TearDown network for sandbox \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" successfully" Jan 29 16:23:39.650922 kubelet[2569]: I0129 16:23:39.650760 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4221b60b-8726-4e68-b252-a62f89e937dd" path="/var/lib/kubelet/pods/4221b60b-8726-4e68-b252-a62f89e937dd/volumes" Jan 29 16:23:39.654672 containerd[1485]: time="2025-01-29T16:23:39.654206498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:23:39.654672 containerd[1485]: time="2025-01-29T16:23:39.654344505Z" level=info msg="RemovePodSandbox \"c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340\" returns successfully" Jan 29 16:23:39.655250 kubelet[2569]: I0129 16:23:39.654034 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08433b6-100d-418e-a7a8-2d1174bf43b0" path="/var/lib/kubelet/pods/f08433b6-100d-418e-a7a8-2d1174bf43b0/volumes" Jan 29 16:23:39.657546 containerd[1485]: time="2025-01-29T16:23:39.657465047Z" level=info msg="StopPodSandbox for \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\"" Jan 29 16:23:39.657749 containerd[1485]: time="2025-01-29T16:23:39.657649581Z" level=info msg="TearDown network for sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" successfully" Jan 29 16:23:39.657749 containerd[1485]: time="2025-01-29T16:23:39.657700177Z" level=info msg="StopPodSandbox for \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" returns successfully" Jan 29 16:23:39.659135 containerd[1485]: time="2025-01-29T16:23:39.658387630Z" level=info msg="RemovePodSandbox for \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\"" Jan 29 16:23:39.659135 containerd[1485]: time="2025-01-29T16:23:39.658441453Z" level=info msg="Forcibly stopping sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\"" Jan 29 16:23:39.659135 containerd[1485]: time="2025-01-29T16:23:39.658556899Z" level=info msg="TearDown network for sandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" successfully" Jan 29 16:23:39.663759 containerd[1485]: time="2025-01-29T16:23:39.663683451Z" level=warning msg="Failed to get podSandbox status for container event for 
sandboxID \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:23:39.664248 containerd[1485]: time="2025-01-29T16:23:39.664095317Z" level=info msg="RemovePodSandbox \"6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2\" returns successfully" Jan 29 16:23:39.756692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2-rootfs.mount: Deactivated successfully. Jan 29 16:23:39.757000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6604940116418ca541fa7e6e1526d91ab216d03da5313961b239caf2d8030af2-shm.mount: Deactivated successfully. Jan 29 16:23:39.757138 systemd[1]: var-lib-kubelet-pods-f08433b6\x2d100d\x2d418e\x2da7a8\x2d2d1174bf43b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:23:39.757245 systemd[1]: var-lib-kubelet-pods-f08433b6\x2d100d\x2d418e\x2da7a8\x2d2d1174bf43b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlwvf5.mount: Deactivated successfully. Jan 29 16:23:39.757379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1a581d0cb691ce7bd73cf3d6b3cdebc7bb48ac67ef4044adbd19f228cf66340-rootfs.mount: Deactivated successfully. Jan 29 16:23:39.757480 systemd[1]: var-lib-kubelet-pods-4221b60b\x2d8726\x2d4e68\x2db252\x2da62f89e937dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm58cl.mount: Deactivated successfully. Jan 29 16:23:39.757582 systemd[1]: var-lib-kubelet-pods-f08433b6\x2d100d\x2d418e\x2da7a8\x2d2d1174bf43b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:23:39.807912 kubelet[2569]: E0129 16:23:39.805708 2569 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:23:40.620129 sshd[4239]: Connection closed by 139.178.89.65 port 48486 Jan 29 16:23:40.623941 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:40.657350 systemd[1]: sshd@26-164.92.66.114:22-139.178.89.65:48486.service: Deactivated successfully. Jan 29 16:23:40.665593 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 16:23:40.681547 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit. Jan 29 16:23:40.690708 systemd[1]: Started sshd@27-164.92.66.114:22-139.178.89.65:48496.service - OpenSSH per-connection server daemon (139.178.89.65:48496). Jan 29 16:23:40.695382 systemd-logind[1467]: Removed session 26. Jan 29 16:23:40.781382 sshd[4401]: Accepted publickey for core from 139.178.89.65 port 48496 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:40.784036 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:40.792552 systemd-logind[1467]: New session 27 of user core. Jan 29 16:23:40.802424 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 16:23:41.651180 sshd[4404]: Connection closed by 139.178.89.65 port 48496 Jan 29 16:23:41.651163 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:41.666814 systemd[1]: sshd@27-164.92.66.114:22-139.178.89.65:48496.service: Deactivated successfully. Jan 29 16:23:41.672570 systemd[1]: session-27.scope: Deactivated successfully. 
Jan 29 16:23:41.674636 systemd-logind[1467]: Session 27 logged out. Waiting for processes to exit. Jan 29 16:23:41.689828 systemd[1]: Started sshd@28-164.92.66.114:22-139.178.89.65:53222.service - OpenSSH per-connection server daemon (139.178.89.65:53222). Jan 29 16:23:41.695312 systemd-logind[1467]: Removed session 27. Jan 29 16:23:41.713320 kubelet[2569]: E0129 16:23:41.709770 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f08433b6-100d-418e-a7a8-2d1174bf43b0" containerName="mount-cgroup" Jan 29 16:23:41.713320 kubelet[2569]: E0129 16:23:41.709829 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f08433b6-100d-418e-a7a8-2d1174bf43b0" containerName="apply-sysctl-overwrites" Jan 29 16:23:41.713320 kubelet[2569]: E0129 16:23:41.709840 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f08433b6-100d-418e-a7a8-2d1174bf43b0" containerName="clean-cilium-state" Jan 29 16:23:41.713320 kubelet[2569]: E0129 16:23:41.709851 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f08433b6-100d-418e-a7a8-2d1174bf43b0" containerName="cilium-agent" Jan 29 16:23:41.713320 kubelet[2569]: E0129 16:23:41.709862 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4221b60b-8726-4e68-b252-a62f89e937dd" containerName="cilium-operator" Jan 29 16:23:41.713320 kubelet[2569]: E0129 16:23:41.709874 2569 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f08433b6-100d-418e-a7a8-2d1174bf43b0" containerName="mount-bpf-fs" Jan 29 16:23:41.713320 kubelet[2569]: I0129 16:23:41.709928 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="4221b60b-8726-4e68-b252-a62f89e937dd" containerName="cilium-operator" Jan 29 16:23:41.713320 kubelet[2569]: I0129 16:23:41.709940 2569 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08433b6-100d-418e-a7a8-2d1174bf43b0" containerName="cilium-agent" Jan 29 16:23:41.738565 systemd[1]: Created slice kubepods-burstable-pod7676046c_2462_49f7_b516_b3b054b41990.slice - libcontainer container kubepods-burstable-pod7676046c_2462_49f7_b516_b3b054b41990.slice. Jan 29 16:23:41.785108 sshd[4413]: Accepted publickey for core from 139.178.89.65 port 53222 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:41.786859 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:41.803223 systemd-logind[1467]: New session 28 of user core. Jan 29 16:23:41.808781 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 29 16:23:41.819558 kubelet[2569]: I0129 16:23:41.817844 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-bpf-maps\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819558 kubelet[2569]: I0129 16:23:41.817921 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-hostproc\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819558 kubelet[2569]: I0129 16:23:41.817987 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-host-proc-sys-net\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819558 kubelet[2569]: I0129 16:23:41.818049 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7676046c-2462-49f7-b516-b3b054b41990-cilium-ipsec-secrets\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819558 kubelet[2569]: I0129 16:23:41.818108 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7676046c-2462-49f7-b516-b3b054b41990-hubble-tls\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819558 kubelet[2569]: I0129 16:23:41.818138 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-lib-modules\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819992 kubelet[2569]: I0129 16:23:41.818170 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-cni-path\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819992 kubelet[2569]: I0129 16:23:41.818198 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-etc-cni-netd\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819992 kubelet[2569]: I0129 16:23:41.818225 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7676046c-2462-49f7-b516-b3b054b41990-clustermesh-secrets\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819992 kubelet[2569]: I0129 16:23:41.818253 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-cilium-run\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819992 kubelet[2569]: I0129 16:23:41.818279 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-xtables-lock\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.819992 kubelet[2569]: I0129 16:23:41.818329 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-host-proc-sys-kernel\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.822716 kubelet[2569]: I0129 16:23:41.818365 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7676046c-2462-49f7-b516-b3b054b41990-cilium-config-path\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.822716 kubelet[2569]: I0129 16:23:41.818396 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thkh2\" (UniqueName: \"kubernetes.io/projected/7676046c-2462-49f7-b516-b3b054b41990-kube-api-access-thkh2\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.822716 kubelet[2569]: I0129 16:23:41.818439 2569 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7676046c-2462-49f7-b516-b3b054b41990-cilium-cgroup\") pod \"cilium-zb6nv\" (UID: \"7676046c-2462-49f7-b516-b3b054b41990\") " pod="kube-system/cilium-zb6nv" Jan 29 16:23:41.886992 sshd[4416]: Connection closed by 139.178.89.65 port 53222 Jan 29 16:23:41.887905 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:41.903743 systemd[1]: sshd@28-164.92.66.114:22-139.178.89.65:53222.service: Deactivated successfully. Jan 29 16:23:41.909046 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 16:23:41.912224 systemd-logind[1467]: Session 28 logged out. Waiting for processes to exit. Jan 29 16:23:41.922209 systemd[1]: Started sshd@29-164.92.66.114:22-139.178.89.65:53232.service - OpenSSH per-connection server daemon (139.178.89.65:53232). Jan 29 16:23:41.925782 systemd-logind[1467]: Removed session 28. Jan 29 16:23:42.039259 sshd[4422]: Accepted publickey for core from 139.178.89.65 port 53232 ssh2: RSA SHA256:1yg7JhvZkrJOwhuBgQvJ79WUbQdosGJaLn9TZ7AtIqY Jan 29 16:23:42.041164 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:42.050382 systemd-logind[1467]: New session 29 of user core. 
Jan 29 16:23:42.054172 kubelet[2569]: E0129 16:23:42.051069 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:42.057111 containerd[1485]: time="2025-01-29T16:23:42.056866696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zb6nv,Uid:7676046c-2462-49f7-b516-b3b054b41990,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:42.058273 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 29 16:23:42.102740 containerd[1485]: time="2025-01-29T16:23:42.102205565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:42.104024 containerd[1485]: time="2025-01-29T16:23:42.103592556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:42.104024 containerd[1485]: time="2025-01-29T16:23:42.103674062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:42.104024 containerd[1485]: time="2025-01-29T16:23:42.103823772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:42.148452 systemd[1]: Started cri-containerd-05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749.scope - libcontainer container 05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749. Jan 29 16:23:42.204665 containerd[1485]: time="2025-01-29T16:23:42.204605628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zb6nv,Uid:7676046c-2462-49f7-b516-b3b054b41990,Namespace:kube-system,Attempt:0,} returns sandbox id \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\"" Jan 29 16:23:42.207324 kubelet[2569]: E0129 16:23:42.206244 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:42.214576 containerd[1485]: time="2025-01-29T16:23:42.214497013Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:23:42.239317 containerd[1485]: time="2025-01-29T16:23:42.239262989Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb\"" Jan 29 16:23:42.243394 containerd[1485]: time="2025-01-29T16:23:42.243342185Z" level=info msg="StartContainer for \"a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb\"" Jan 29 16:23:42.315997 systemd[1]: Started cri-containerd-a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb.scope - libcontainer container a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb. Jan 29 16:23:42.400693 containerd[1485]: time="2025-01-29T16:23:42.400581741Z" level=info msg="StartContainer for \"a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb\" returns successfully" Jan 29 16:23:42.425345 systemd[1]: cri-containerd-a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb.scope: Deactivated successfully. 
Jan 29 16:23:42.481478 containerd[1485]: time="2025-01-29T16:23:42.481231536Z" level=info msg="shim disconnected" id=a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb namespace=k8s.io Jan 29 16:23:42.481478 containerd[1485]: time="2025-01-29T16:23:42.481320424Z" level=warning msg="cleaning up after shim disconnected" id=a90d7713948530d299367baa09830721e651407d60dca01958bd50a3187f92fb namespace=k8s.io Jan 29 16:23:42.481478 containerd[1485]: time="2025-01-29T16:23:42.481334557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:42.631344 kubelet[2569]: I0129 16:23:42.630792 2569 setters.go:600] "Node became not ready" node="ci-4230.0.0-8-df8e9582f3" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:23:42Z","lastTransitionTime":"2025-01-29T16:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 16:23:43.243774 kubelet[2569]: E0129 16:23:43.243729 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:43.247351 containerd[1485]: time="2025-01-29T16:23:43.247287763Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:23:43.300958 containerd[1485]: time="2025-01-29T16:23:43.300358332Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3\"" Jan 29 16:23:43.302907 containerd[1485]: time="2025-01-29T16:23:43.302845327Z" level=info msg="StartContainer for \"547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3\"" Jan 29 16:23:43.362484 systemd[1]: Started cri-containerd-547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3.scope - libcontainer container 547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3. Jan 29 16:23:43.409348 containerd[1485]: time="2025-01-29T16:23:43.409281281Z" level=info msg="StartContainer for \"547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3\" returns successfully" Jan 29 16:23:43.428391 systemd[1]: cri-containerd-547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3.scope: Deactivated successfully. Jan 29 16:23:43.477295 containerd[1485]: time="2025-01-29T16:23:43.476834951Z" level=info msg="shim disconnected" id=547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3 namespace=k8s.io Jan 29 16:23:43.477615 containerd[1485]: time="2025-01-29T16:23:43.477290188Z" level=warning msg="cleaning up after shim disconnected" id=547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3 namespace=k8s.io Jan 29 16:23:43.477615 containerd[1485]: time="2025-01-29T16:23:43.477326040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:43.949765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-547b8045b9129d95c10502208ef059d2e8d1d03303b85e4a40f9b841f519a1b3-rootfs.mount: Deactivated successfully. 
Jan 29 16:23:44.248938 kubelet[2569]: E0129 16:23:44.248737 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:44.254346 containerd[1485]: time="2025-01-29T16:23:44.254291896Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:23:44.284516 containerd[1485]: time="2025-01-29T16:23:44.284451277Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead\"" Jan 29 16:23:44.288452 containerd[1485]: time="2025-01-29T16:23:44.288367289Z" level=info msg="StartContainer for \"349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead\"" Jan 29 16:23:44.344378 systemd[1]: Started cri-containerd-349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead.scope - libcontainer container 349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead. Jan 29 16:23:44.423198 containerd[1485]: time="2025-01-29T16:23:44.422619086Z" level=info msg="StartContainer for \"349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead\" returns successfully" Jan 29 16:23:44.433829 systemd[1]: cri-containerd-349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead.scope: Deactivated successfully. Jan 29 16:23:44.485464 containerd[1485]: time="2025-01-29T16:23:44.484637969Z" level=info msg="shim disconnected" id=349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead namespace=k8s.io Jan 29 16:23:44.485464 containerd[1485]: time="2025-01-29T16:23:44.484717466Z" level=warning msg="cleaning up after shim disconnected" id=349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead namespace=k8s.io Jan 29 16:23:44.485464 containerd[1485]: time="2025-01-29T16:23:44.484731928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:44.808031 kubelet[2569]: E0129 16:23:44.807975 2569 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:23:44.945924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-349d5cbc192897d6a1b3745edf2daa570a6cbe50798067bdd7f151e0ecac1ead-rootfs.mount: Deactivated successfully. 
Jan 29 16:23:45.254303 kubelet[2569]: E0129 16:23:45.254266 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:45.258993 containerd[1485]: time="2025-01-29T16:23:45.258939320Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:23:45.282412 containerd[1485]: time="2025-01-29T16:23:45.278599523Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680\"" Jan 29 16:23:45.282412 containerd[1485]: time="2025-01-29T16:23:45.280824393Z" level=info msg="StartContainer for \"c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680\"" Jan 29 16:23:45.341527 systemd[1]: Started cri-containerd-c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680.scope - libcontainer container c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680. Jan 29 16:23:45.384605 systemd[1]: cri-containerd-c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680.scope: Deactivated successfully. Jan 29 16:23:45.385010 containerd[1485]: time="2025-01-29T16:23:45.384674210Z" level=info msg="StartContainer for \"c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680\" returns successfully" Jan 29 16:23:45.418301 containerd[1485]: time="2025-01-29T16:23:45.418058981Z" level=info msg="shim disconnected" id=c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680 namespace=k8s.io Jan 29 16:23:45.418301 containerd[1485]: time="2025-01-29T16:23:45.418155253Z" level=warning msg="cleaning up after shim disconnected" id=c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680 namespace=k8s.io Jan 29 16:23:45.418301 containerd[1485]: time="2025-01-29T16:23:45.418165079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:23:45.946046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6981d0f018935c5c2fc180bb02db9d7f3e80bfb50900724f7e18dee5513e680-rootfs.mount: Deactivated successfully. 
Jan 29 16:23:46.259947 kubelet[2569]: E0129 16:23:46.259450 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:46.262971 containerd[1485]: time="2025-01-29T16:23:46.262857840Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:23:46.291056 containerd[1485]: time="2025-01-29T16:23:46.290994424Z" level=info msg="CreateContainer within sandbox \"05420d473bed9494f86bb0a78d95e03cdc9b6d87b403764120b2595ca885b749\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15\"" Jan 29 16:23:46.294825 containerd[1485]: time="2025-01-29T16:23:46.294759577Z" level=info msg="StartContainer for \"f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15\"" Jan 29 16:23:46.358480 systemd[1]: Started cri-containerd-f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15.scope - libcontainer container f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15. Jan 29 16:23:46.401047 containerd[1485]: time="2025-01-29T16:23:46.400951002Z" level=info msg="StartContainer for \"f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15\" returns successfully" Jan 29 16:23:46.948473 systemd[1]: run-containerd-runc-k8s.io-f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15-runc.qlIYaB.mount: Deactivated successfully. Jan 29 16:23:47.020356 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 16:23:47.267860 kubelet[2569]: E0129 16:23:47.267616 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:48.269133 kubelet[2569]: E0129 16:23:48.268952 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:48.776209 systemd[1]: run-containerd-runc-k8s.io-f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15-runc.l15BUJ.mount: Deactivated successfully. 
Jan 29 16:23:48.926097 kubelet[2569]: E0129 16:23:48.925322 2569 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52000->127.0.0.1:43605: write tcp 127.0.0.1:52000->127.0.0.1:43605: write: broken pipe Jan 29 16:23:49.271916 kubelet[2569]: E0129 16:23:49.271864 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:51.256745 systemd-networkd[1370]: lxc_health: Link UP Jan 29 16:23:51.280018 systemd-networkd[1370]: lxc_health: Gained carrier Jan 29 16:23:52.059460 kubelet[2569]: E0129 16:23:52.058463 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:52.105446 kubelet[2569]: I0129 16:23:52.096574 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zb6nv" podStartSLOduration=11.096544957 podStartE2EDuration="11.096544957s" podCreationTimestamp="2025-01-29 16:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:47.302131309 +0000 UTC m=+127.822844249" watchObservedRunningTime="2025-01-29 16:23:52.096544957 +0000 UTC m=+132.617257899" Jan 29 16:23:52.283816 kubelet[2569]: E0129 16:23:52.283766 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:52.620314 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jan 29 16:23:53.286529 kubelet[2569]: E0129 16:23:53.286470 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 16:23:57.971492 systemd[1]: run-containerd-runc-k8s.io-f0c2ab64fc45b6eb53680befd99da8abaf88e0c5e1711819e3440ff3f54d3e15-runc.bhhtay.mount: Deactivated successfully. Jan 29 16:23:58.077125 sshd[4430]: Connection closed by 139.178.89.65 port 53232 Jan 29 16:23:58.077939 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:58.084998 systemd[1]: sshd@29-164.92.66.114:22-139.178.89.65:53232.service: Deactivated successfully. Jan 29 16:23:58.094200 systemd[1]: session-29.scope: Deactivated successfully. Jan 29 16:23:58.100333 systemd-logind[1467]: Session 29 logged out. Waiting for processes to exit. Jan 29 16:23:58.104016 systemd-logind[1467]: Removed session 29.