Jan 30 13:07:34.873516 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:07:34.873535 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:07:34.873546 kernel: BIOS-provided physical RAM map:
Jan 30 13:07:34.873552 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:07:34.873557 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:07:34.873562 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:07:34.873568 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 30 13:07:34.873574 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 30 13:07:34.873581 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:07:34.873586 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 13:07:34.873592 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:07:34.873597 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:07:34.873602 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:07:34.873608 kernel: NX (Execute Disable) protection: active
Jan 30 13:07:34.873616 kernel: APIC: Static calls initialized
Jan 30 13:07:34.873622 kernel: SMBIOS 3.0.0 present.
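The BIOS-e820 entries above are the firmware's map of physical memory, and the two "usable" ranges account for essentially all of the ~2 GB the kernel later reports. A minimal Python sketch (illustrative only; it assumes the log text has been saved into a string) that totals the usable ranges:

import re

E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_text: str) -> int:
    # e820 ranges are inclusive, so each range spans end - start + 1 bytes.
    total = 0
    for start, end, kind in E820.findall(log_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

For the map above this gives (0x9fc00 - 0x0) + (0x7cfdc000 - 0x100000) = 2,096,610,304 bytes, closely matching the 2047464K total in the later "Memory:" line.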
Jan 30 13:07:34.873628 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 30 13:07:34.873633 kernel: Hypervisor detected: KVM
Jan 30 13:07:34.873639 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:07:34.873645 kernel: kvm-clock: using sched offset of 2737049416 cycles
Jan 30 13:07:34.873651 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:07:34.873657 kernel: tsc: Detected 2445.404 MHz processor
Jan 30 13:07:34.873663 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:07:34.873669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:07:34.873677 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 30 13:07:34.873683 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:07:34.873689 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:07:34.873695 kernel: Using GB pages for direct mapping
Jan 30 13:07:34.873701 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:07:34.873707 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 30 13:07:34.873713 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:07:34.873719 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:07:34.873725 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:07:34.873733 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 30 13:07:34.873739 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:07:34.873744 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:07:34.873750 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:07:34.873756 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:07:34.873762 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 30 13:07:34.873768 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 30 13:07:34.873779 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 30 13:07:34.873785 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 30 13:07:34.873791 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 30 13:07:34.873797 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 30 13:07:34.873803 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 30 13:07:34.873809 kernel: No NUMA configuration found
Jan 30 13:07:34.873815 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 30 13:07:34.873823 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 30 13:07:34.873829 kernel: Zone ranges:
Jan 30 13:07:34.873836 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:07:34.873842 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 30 13:07:34.873848 kernel: Normal empty
Jan 30 13:07:34.873854 kernel: Movable zone start for each node
Jan 30 13:07:34.873860 kernel: Early memory node ranges
Jan 30 13:07:34.873866 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:07:34.873872 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 30 13:07:34.873880 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 30 13:07:34.873886 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:07:34.873892 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:07:34.873898 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 30 13:07:34.873904 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:07:34.873910 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:07:34.873916 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:07:34.873922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:07:34.873928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:07:34.873936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:07:34.873942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:07:34.874171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:07:34.874179 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:07:34.874186 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:07:34.874192 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:07:34.874199 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:07:34.874205 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 13:07:34.874211 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:07:34.874218 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:07:34.874227 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:07:34.874234 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:07:34.874240 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:07:34.874246 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:07:34.874252 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 13:07:34.874259 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:07:34.874266 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:07:34.874272 kernel: random: crng init done
Jan 30 13:07:34.874280 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:07:34.874287 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:07:34.874293 kernel: Fallback order for Node 0: 0
Jan 30 13:07:34.874299 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 30 13:07:34.874305 kernel: Policy zone: DMA32
Jan 30 13:07:34.874311 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:07:34.874318 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 127200K reserved, 0K cma-reserved)
Jan 30 13:07:34.874324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:07:34.874330 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:07:34.874338 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:07:34.874345 kernel: Dynamic Preempt: voluntary
Jan 30 13:07:34.874351 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:07:34.874357 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:07:34.874364 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:07:34.874370 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:07:34.874377 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:07:34.874383 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:07:34.874389 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:07:34.874397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:07:34.874403 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:07:34.874409 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:07:34.874416 kernel: Console: colour VGA+ 80x25
Jan 30 13:07:34.874422 kernel: printk: console [tty0] enabled
Jan 30 13:07:34.874428 kernel: printk: console [ttyS0] enabled
Jan 30 13:07:34.874434 kernel: ACPI: Core revision 20230628
Jan 30 13:07:34.874440 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:07:34.874446 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:07:34.874454 kernel: x2apic enabled
Jan 30 13:07:34.874461 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:07:34.874467 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:07:34.874473 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:07:34.874479 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Jan 30 13:07:34.874485 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:07:34.874491 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:07:34.874498 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:07:34.874512 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:07:34.874518 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:07:34.874525 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:07:34.874531 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:07:34.874540 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:07:34.874546 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:07:34.874553 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:07:34.874559 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:07:34.874566 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
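The "Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)" line above follows directly from the 2445.404 MHz TSC if the kernel ticks at HZ=1000 (an assumption here; the log does not print HZ): one jiffy is then 1 ms, or 2,445,404 TSC cycles. A sketch reproducing the kernel's integer formatting:

lpj = 2445404                          # loops per jiffy, from the log
hz = 1000                              # assumed tick rate (not stated in the log)
whole = lpj // (500000 // hz)          # 4890
frac = (lpj // (5000 // hz)) % 100     # 80
print(f"{whole}.{frac:02d} BogoMIPS")  # -> 4890.80 BogoMIPS, as logged

Two CPUs at this rate also line up with the "Total of 2 processors activated (9781.61 BogoMIPS)" line further down.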
Jan 30 13:07:34.874575 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:07:34.874581 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:07:34.874588 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:07:34.874594 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:07:34.874601 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:07:34.874607 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:07:34.874614 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:07:34.874620 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:07:34.874629 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:07:34.874635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:07:34.874641 kernel: landlock: Up and running.
Jan 30 13:07:34.874648 kernel: SELinux: Initializing.
Jan 30 13:07:34.874654 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:07:34.874661 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:07:34.874667 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:07:34.874674 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:07:34.874680 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:07:34.874689 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:07:34.874695 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:07:34.874702 kernel: ... version: 0
Jan 30 13:07:34.874708 kernel: ... bit width: 48
Jan 30 13:07:34.874714 kernel: ... generic registers: 6
Jan 30 13:07:34.874721 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:07:34.874727 kernel: ... max period: 00007fffffffffff
Jan 30 13:07:34.874733 kernel: ... fixed-purpose events: 0
Jan 30 13:07:34.874740 kernel: ... event mask: 000000000000003f
Jan 30 13:07:34.874748 kernel: signal: max sigframe size: 1776
Jan 30 13:07:34.874754 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:07:34.874761 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:07:34.874767 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:07:34.874774 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:07:34.874780 kernel: .... node #0, CPUs: #1
Jan 30 13:07:34.874786 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:07:34.874793 kernel: smpboot: Max logical packages: 1
Jan 30 13:07:34.874799 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Jan 30 13:07:34.874807 kernel: devtmpfs: initialized
Jan 30 13:07:34.874814 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:07:34.874820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:07:34.874827 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:07:34.874833 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:07:34.874839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:07:34.874846 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:07:34.875267 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:07:34.875276 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:07:34.875287 kernel: audit: type=2000 audit(1738242454.171:1): state=initialized audit_enabled=0 res=1
Jan 30 13:07:34.875293 kernel: cpuidle: using governor menu
Jan 30 13:07:34.875300 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:07:34.875306 kernel: dca service started, version 1.12.1
Jan 30 13:07:34.875313 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:07:34.875319 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:07:34.875326 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:07:34.875332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:07:34.875339 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:07:34.875347 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:07:34.875354 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:07:34.875360 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:07:34.875366 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:07:34.875373 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:07:34.875379 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:07:34.875386 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:07:34.875392 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:07:34.875399 kernel: ACPI: Interpreter enabled
Jan 30 13:07:34.875407 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:07:34.875413 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:07:34.875420 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:07:34.875426 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:07:34.875433 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:07:34.875439 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:07:34.875598 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:07:34.875714 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:07:34.875824 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:07:34.875834 kernel: PCI host bridge to bus 0000:00
Jan 30 13:07:34.875941 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:07:34.876061 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:07:34.876174 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:07:34.876269 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 30 13:07:34.876362 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:07:34.876461 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 30 13:07:34.876555 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:07:34.876675 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:07:34.876789 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 30 13:07:34.876894 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 30 13:07:34.877033 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 30 13:07:34.879038 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 30 13:07:34.879173 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 30 13:07:34.879281 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:07:34.879394 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.879499 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 30 13:07:34.879610 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.879713 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 30 13:07:34.879833 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.879938 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 30 13:07:34.880069 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.880191 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 30 13:07:34.880302 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.880406 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 30 13:07:34.880522 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.880626 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 30 13:07:34.880736 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.880841 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 30 13:07:34.882046 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.882195 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 30 13:07:34.882326 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 13:07:34.882432 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 30 13:07:34.882545 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:07:34.882649 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:07:34.882764 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:07:34.882867 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 30 13:07:34.883060 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 30 13:07:34.883193 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:07:34.883300 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 13:07:34.883417 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 13:07:34.883525 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 30 13:07:34.883633 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 13:07:34.883743 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 30 13:07:34.883855 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 13:07:34.883979 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 13:07:34.884100 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 13:07:34.884224 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 13:07:34.884334 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 30 13:07:34.884439 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 13:07:34.884549 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 13:07:34.885113 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 13:07:34.885239 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 13:07:34.885348 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 30 13:07:34.885456 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 30 13:07:34.885559 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 13:07:34.885661 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 13:07:34.885763 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 13:07:34.885884 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 13:07:34.888099 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 13:07:34.888224 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 13:07:34.888331 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 13:07:34.888437 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 13:07:34.888556 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 13:07:34.888666 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 30 13:07:34.888782 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 30 13:07:34.888889 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 13:07:34.889011 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 13:07:34.889131 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 13:07:34.889252 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 13:07:34.889361 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 30 13:07:34.889467 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 30 13:07:34.889579 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 13:07:34.889683 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 13:07:34.889786 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 13:07:34.889796 kernel: acpiphp: Slot [0] registered
Jan 30 13:07:34.889911 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 13:07:34.891091 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 30 13:07:34.891212 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 30 13:07:34.891322 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 30 13:07:34.891434 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 13:07:34.891538 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 13:07:34.891640 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 13:07:34.891650 kernel: acpiphp: Slot [0-2] registered
Jan 30 13:07:34.891752 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 13:07:34.891854 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 30 13:07:34.893181 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 13:07:34.893195 kernel: acpiphp: Slot [0-3] registered
Jan 30 13:07:34.893503 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 13:07:34.893625 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 13:07:34.893735 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 13:07:34.893745 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:07:34.893752 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:07:34.893758 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:07:34.893765 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:07:34.893772 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:07:34.893783 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:07:34.893789 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:07:34.893796 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:07:34.893803 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:07:34.893809 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:07:34.893816 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:07:34.893822 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:07:34.893829 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:07:34.893836 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:07:34.893844 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:07:34.893851 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:07:34.893858 kernel: iommu: Default domain type: Translated
Jan 30 13:07:34.893864 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:07:34.893870 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:07:34.893877 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:07:34.893884 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:07:34.893891 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 30 13:07:34.895006 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:07:34.895205 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:07:34.895313 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:07:34.895323 kernel: vgaarb: loaded
Jan 30 13:07:34.895330 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:07:34.895337 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:07:34.895344 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:07:34.895350 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:07:34.895357 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:07:34.895364 kernel: pnp: PnP ACPI init
Jan 30 13:07:34.895484 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:07:34.895495 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 13:07:34.895502 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:07:34.895509 kernel: NET: Registered PF_INET protocol family
Jan 30 13:07:34.895516 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:07:34.895523 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:07:34.895529 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:07:34.895536 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:07:34.895546 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:07:34.895552 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:07:34.895559 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:07:34.895566 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:07:34.895572 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:07:34.895579 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:07:34.895683 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 13:07:34.895786 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 13:07:34.895893 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 13:07:34.896016 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 13:07:34.896174 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 13:07:34.898052 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 13:07:34.898177 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 13:07:34.898284 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 13:07:34.898387 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 13:07:34.898491 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 13:07:34.898602 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 13:07:34.898735 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 13:07:34.898848 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 13:07:34.900977 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 13:07:34.901104 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 13:07:34.901210 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 13:07:34.901320 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 13:07:34.901440 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 13:07:34.901548 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 13:07:34.901651 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 13:07:34.901753 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 13:07:34.901855 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 13:07:34.902001 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 13:07:34.902123 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 13:07:34.902227 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 13:07:34.902329 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 30 13:07:34.902431 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 13:07:34.902540 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 13:07:34.902642 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 13:07:34.902745 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 30 13:07:34.902848 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 30 13:07:34.902965 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 13:07:34.903079 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 13:07:34.903199 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 30 13:07:34.903301 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 13:07:34.903404 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 13:07:34.903505 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:07:34.903607 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:07:34.903701 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:07:34.903795 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 30 13:07:34.903888 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:07:34.906010 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 30 13:07:34.906137 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 13:07:34.906239 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 13:07:34.906351 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 13:07:34.906450 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 13:07:34.906556 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 13:07:34.906655 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 13:07:34.906759 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 13:07:34.906858 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 13:07:34.908993 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 13:07:34.909113 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 13:07:34.909220 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 13:07:34.909320 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 13:07:34.909431 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 30 13:07:34.909550 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 30 13:07:34.909649 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 13:07:34.909760 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 30 13:07:34.909859 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 30 13:07:34.909981 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 13:07:34.910105 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 30 13:07:34.910208 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 13:07:34.910308 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 13:07:34.910318 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:07:34.910329 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:07:34.910337 kernel: Initialise system trusted keyrings
Jan 30 13:07:34.910344 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:07:34.910351 kernel: Key type asymmetric registered
Jan 30 13:07:34.910357 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:07:34.910364 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:07:34.910371 kernel: io scheduler mq-deadline registered
Jan 30 13:07:34.910378 kernel: io scheduler kyber registered
Jan 30 13:07:34.910385 kernel: io scheduler bfq registered
Jan 30 13:07:34.910493 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 30 13:07:34.910598 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 30 13:07:34.910703 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 30 13:07:34.910807 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 30 13:07:34.910910 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 30 13:07:34.912050 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 30 13:07:34.912175 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 30 13:07:34.912280 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 30 13:07:34.912389 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 30 13:07:34.912492 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 30 13:07:34.912596 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 30 13:07:34.912699 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 30 13:07:34.912801 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 30 13:07:34.912903 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 30 13:07:34.914036 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 30 13:07:34.914158 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 30 13:07:34.914174 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:07:34.914277 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 30 13:07:34.914379 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 30 13:07:34.914388 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:07:34.914396 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 30 13:07:34.914403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:07:34.914410 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:07:34.914417 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:07:34.914424 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:07:34.914434 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:07:34.914441 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:07:34.914547 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 13:07:34.914645 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 13:07:34.914742 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:07:34 UTC (1738242454)
Jan 30 13:07:34.914837 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:07:34.914847 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:07:34.914854 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:07:34.914864 kernel: Segment Routing with IPv6
Jan 30 13:07:34.914871 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:07:34.914878 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:07:34.914884 kernel: Key type dns_resolver registered
Jan 30 13:07:34.914891 kernel: IPI shorthand broadcast: enabled
Jan 30 13:07:34.914898 kernel: sched_clock: Marking stable (1072006879, 134114418)->(1255589924, -49468627)
Jan 30 13:07:34.914905 kernel: registered taskstats version 1
Jan 30 13:07:34.914912 kernel: Loading compiled-in X.509 certificates
Jan 30 13:07:34.914919 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 13:07:34.914928 kernel: Key type .fscrypt registered
Jan 30 13:07:34.914935 kernel: Key type fscrypt-provisioning registered
Jan 30 13:07:34.914959 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:07:34.914967 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:07:34.914973 kernel: ima: No architecture policies found
Jan 30 13:07:34.914980 kernel: clk: Disabling unused clocks
Jan 30 13:07:34.914987 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 30 13:07:34.914995 kernel: Write protecting the kernel read-only data: 38912k
Jan 30 13:07:34.915002 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 30 13:07:34.915011 kernel: Run /init as init process
Jan 30 13:07:34.915018 kernel: with arguments:
Jan 30 13:07:34.915025 kernel: /init
Jan 30 13:07:34.915032 kernel: with environment:
Jan 30 13:07:34.915039 kernel: HOME=/
Jan 30 13:07:34.915045 kernel: TERM=linux
Jan 30 13:07:34.915053 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:07:34.915062 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:07:34.915073 systemd[1]: Detected virtualization kvm.
Jan 30 13:07:34.915102 systemd[1]: Detected architecture x86-64.
Jan 30 13:07:34.915109 systemd[1]: Running in initrd.
Jan 30 13:07:34.915117 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:07:34.915124 systemd[1]: Hostname set to .
Jan 30 13:07:34.915132 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:07:34.915139 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:07:34.915147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:07:34.915157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:07:34.915165 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:07:34.915172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:07:34.915180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:07:34.915188 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:07:34.915197 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:07:34.915207 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:07:34.915214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:07:34.915222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:07:34.915229 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:07:34.915237 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:07:34.915244 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:07:34.915252 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:07:34.915259 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
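The rtc_cmos line in the block above pairs the wall-clock time with its raw epoch value, and the earlier audit record (audit(1738242454.171:1)) carries the same second. A one-line check in Python:

from datetime import datetime, timezone

# 1738242454 is the epoch value from the rtc_cmos line above.
print(datetime.fromtimestamp(1738242454, tz=timezone.utc).isoformat())
# -> 2025-01-30T13:07:34+00:00, matching "setting system clock to 2025-01-30T13:07:34 UTC"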
Jan 30 13:07:34.915266 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:07:34.915276 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:07:34.915283 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:07:34.915291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:07:34.915298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:07:34.915306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:07:34.915313 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:07:34.915320 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:07:34.915328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:07:34.915335 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:07:34.915345 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:07:34.915352 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:07:34.915360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:07:34.915387 systemd-journald[188]: Collecting audit messages is disabled.
Jan 30 13:07:34.915409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:07:34.915417 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:07:34.915424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:07:34.915432 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:07:34.915440 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:07:34.915450 systemd-journald[188]: Journal started
Jan 30 13:07:34.915467 systemd-journald[188]: Runtime Journal (/run/log/journal/da4cbd71597a4e67900b158dc587e6b5) is 4.8M, max 38.3M, 33.5M free.
Jan 30 13:07:34.895158 systemd-modules-load[189]: Inserted module 'overlay'
Jan 30 13:07:34.945740 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:07:34.945768 kernel: Bridge firewalling registered
Jan 30 13:07:34.919584 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 30 13:07:34.951964 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:07:34.952004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:07:34.952652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:07:34.955455 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:07:34.961070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:07:34.965097 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:07:34.968473 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:07:34.971889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:07:34.978863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:07:34.983072 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:07:34.983703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:07:34.984312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:07:34.993120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:07:34.996098 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:07:35.002366 dracut-cmdline[221]: dracut-dracut-053
Jan 30 13:07:35.004599 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:07:35.027607 systemd-resolved[225]: Positive Trust Anchors:
Jan 30 13:07:35.028311 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:07:35.028338 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:07:35.033600 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 30 13:07:35.035031 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:07:35.036109 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:07:35.070994 kernel: SCSI subsystem initialized
Jan 30 13:07:35.078967 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:07:35.088984 kernel: iscsi: registered transport (tcp)
Jan 30 13:07:35.106979 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:07:35.107035 kernel: QLogic iSCSI HBA Driver
Jan 30 13:07:35.143398 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:07:35.148058 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:07:35.171319 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:07:35.171374 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:07:35.171385 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:07:35.210987 kernel: raid6: avx2x4 gen() 33602 MB/s
Jan 30 13:07:35.227979 kernel: raid6: avx2x2 gen() 32579 MB/s
Jan 30 13:07:35.245075 kernel: raid6: avx2x1 gen() 23755 MB/s
Jan 30 13:07:35.245112 kernel: raid6: using algorithm avx2x4 gen() 33602 MB/s
Jan 30 13:07:35.263155 kernel: raid6: .... xor() 4942 MB/s, rmw enabled
Jan 30 13:07:35.263192 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:07:35.281981 kernel: xor: automatically using best checksumming function avx
Jan 30 13:07:35.398985 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:07:35.409256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
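dracut logs the effective kernel command line above as plain key=value tokens. A small sketch of the usual last-one-wins parsing, handy for pulling out individual parameters such as root= or verity.usrhash= (an approximation: it ignores the kernel's quoting rules):

def parse_cmdline(cmdline: str) -> dict:
    # Split on whitespace; partition keeps any '=' inside the value (e.g. root=LABEL=ROOT).
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # repeated keys: last occurrence wins
    return params

args = parse_cmdline("BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT flatcar.oem.id=hetzner "
                     "verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466")
print(args["root"], args["flatcar.oem.id"])  # -> LABEL=ROOT hetzner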
Jan 30 13:07:35.415092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:07:35.428038 systemd-udevd[408]: Using default interface naming scheme 'v255'.
Jan 30 13:07:35.431720 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:07:35.440143 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:07:35.453350 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Jan 30 13:07:35.481436 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:07:35.489090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:07:35.558550 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:07:35.565158 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:07:35.588558 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:07:35.590245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:07:35.590694 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:07:35.592179 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:07:35.601126 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:07:35.613593 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:07:35.659506 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:07:35.690974 kernel: scsi host0: Virtio SCSI HBA
Jan 30 13:07:35.696961 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 30 13:07:35.708154 kernel: libata version 3.00 loaded.
Jan 30 13:07:35.711006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:07:35.711116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:07:35.711730 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:07:35.714041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:07:35.714141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:07:35.715989 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:07:35.733770 kernel: ACPI: bus type USB registered
Jan 30 13:07:35.733855 kernel: usbcore: registered new interface driver usbfs
Jan 30 13:07:35.733893 kernel: usbcore: registered new interface driver hub
Jan 30 13:07:35.733924 kernel: usbcore: registered new device driver usb
Jan 30 13:07:35.730196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:07:35.740389 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 13:07:35.761354 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 30 13:07:35.761504 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 13:07:35.761696 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 13:07:35.761876 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 30 13:07:35.762162 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 13:07:35.762345 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:07:35.762367 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:07:35.762382 kernel: hub 1-0:1.0: USB hub found
Jan 30 13:07:35.762583 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 13:07:35.762767 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 13:07:35.765023 kernel: hub 2-0:1.0: USB hub found
Jan 30 13:07:35.765211 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 13:07:35.775353 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:07:35.790110 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:07:35.790126 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:07:35.790261 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:07:35.790382 kernel: scsi host1: ahci
Jan 30 13:07:35.790511 kernel: scsi host2: ahci
Jan 30 13:07:35.790633 kernel: scsi host3: ahci
Jan 30 13:07:35.790774 kernel: scsi host4: ahci
Jan 30 13:07:35.791788 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 30 13:07:35.791934 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 30 13:07:35.792102 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:07:35.792232 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 30 13:07:35.792362 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 13:07:35.792490 kernel: scsi host5: ahci
Jan 30 13:07:35.792616 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:07:35.792629 kernel: GPT:17805311 != 80003071
Jan 30 13:07:35.792638 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:07:35.792646 kernel: GPT:17805311 != 80003071
Jan 30 13:07:35.792654 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:07:35.792662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:07:35.792670 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:07:35.792803 kernel: scsi host6: ahci
Jan 30 13:07:35.792927 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51
Jan 30 13:07:35.792941 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51
Jan 30 13:07:35.794649 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51
Jan 30 13:07:35.794664 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51
Jan 30 13:07:35.794673 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51
Jan 30 13:07:35.794681 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51
Jan 30 13:07:35.845305 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 30 13:07:35.849797 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (457)
Jan 30 13:07:35.853979 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (462)
Jan 30 13:07:35.854047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:07:35.866805 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 30 13:07:35.871590 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 30 13:07:35.877783 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 30 13:07:35.882811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
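The GPT complaints above ("GPT:17805311 != 80003071") mean the backup GPT header still sits at LBA 17805311, the last sector of the original disk image, while the provisioned disk actually ends at LBA 80003071; disk-uuid.service relocates it a few lines further down ("Secondary Header is updated"). The arithmetic, in Python:

SECTOR = 512
image_last_lba = 17805311  # where the backup header currently is (from the log)
disk_last_lba = 80003071   # actual last LBA of /dev/sda (from the log)
print((image_last_lba + 1) * SECTOR)  # 9116319744  -> the ~9.1 GB image the install was written from
print((disk_last_lba + 1) * SECTOR)   # 40961572864 -> the 41.0 GB disk reported above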
Jan 30 13:07:35.893103 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:07:35.896088 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:07:35.899290 disk-uuid[554]: Primary Header is updated.
Jan 30 13:07:35.899290 disk-uuid[554]: Secondary Entries is updated.
Jan 30 13:07:35.899290 disk-uuid[554]: Secondary Header is updated.
Jan 30 13:07:35.905722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:07:35.912893 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:07:36.003132 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 13:07:36.101738 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 13:07:36.101825 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:07:36.107491 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:07:36.107536 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:07:36.107548 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:07:36.107558 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:07:36.107569 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:07:36.109130 kernel: ata1.00: applying bridge limits
Jan 30 13:07:36.110251 kernel: ata1.00: configured for UDMA/100
Jan 30 13:07:36.113987 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:07:36.147984 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:07:36.151969 kernel: usbcore: registered new interface driver usbhid
Jan 30 13:07:36.151992 kernel: usbhid: USB HID core driver
Jan 30 13:07:36.158453 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 30 13:07:36.158476 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 30 13:07:36.158663 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:07:36.177640 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:07:36.177665 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:07:36.916987 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:07:36.917304 disk-uuid[555]: The operation has completed successfully.
Jan 30 13:07:36.970132 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:07:36.970249 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:07:36.985085 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:07:36.988321 sh[595]: Success
Jan 30 13:07:37.000988 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:07:37.047454 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:07:37.055037 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:07:37.056144 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:07:37.071363 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:07:37.071416 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:07:37.074092 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:07:37.074123 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:07:37.075285 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:07:37.083972 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:07:37.085668 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:07:37.086985 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:07:37.092148 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:07:37.095063 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:07:37.111118 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:07:37.111161 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:07:37.111172 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:07:37.117257 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:07:37.117283 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:07:37.126831 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:07:37.129056 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:07:37.134575 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:07:37.140084 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:07:37.205850 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:07:37.216409 ignition[705]: Ignition 2.20.0 Jan 30 13:07:37.216421 ignition[705]: Stage: fetch-offline Jan 30 13:07:37.217107 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:07:37.216453 ignition[705]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:07:37.219142 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:07:37.216463 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:07:37.216538 ignition[705]: parsed url from cmdline: "" Jan 30 13:07:37.216542 ignition[705]: no config URL provided Jan 30 13:07:37.216547 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:07:37.216555 ignition[705]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:07:37.216560 ignition[705]: failed to fetch config: resource requires networking Jan 30 13:07:37.216704 ignition[705]: Ignition finished successfully Jan 30 13:07:37.238809 systemd-networkd[781]: lo: Link UP Jan 30 13:07:37.238820 systemd-networkd[781]: lo: Gained carrier Jan 30 13:07:37.241293 systemd-networkd[781]: Enumeration completed Jan 30 13:07:37.241669 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:07:37.242581 systemd[1]: Reached target network.target - Network. Jan 30 13:07:37.242664 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 13:07:37.242667 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:07:37.243510 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:37.243513 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:07:37.244172 systemd-networkd[781]: eth0: Link UP Jan 30 13:07:37.244176 systemd-networkd[781]: eth0: Gained carrier Jan 30 13:07:37.244183 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:37.249157 systemd-networkd[781]: eth1: Link UP Jan 30 13:07:37.249161 systemd-networkd[781]: eth1: Gained carrier Jan 30 13:07:37.249167 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:37.251157 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:07:37.262457 ignition[784]: Ignition 2.20.0 Jan 30 13:07:37.262468 ignition[784]: Stage: fetch Jan 30 13:07:37.262615 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:07:37.262626 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:07:37.262712 ignition[784]: parsed url from cmdline: "" Jan 30 13:07:37.262716 ignition[784]: no config URL provided Jan 30 13:07:37.262721 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:07:37.262729 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:07:37.262750 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 13:07:37.262889 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 13:07:37.271999 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:07:37.300989 systemd-networkd[781]: eth0: DHCPv4 address 138.199.163.224/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 13:07:37.463457 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 30 13:07:37.469483 ignition[784]: GET result: OK Jan 30 13:07:37.469550 ignition[784]: parsing config with SHA512: d256913fe2bc382d1d30e68b3dcef5a4ecffb904b428979f773a1cb0403b83b90e072a5ebaf27f9cd89f2619532b1b471d006252491b87d561e4e90099960281 Jan 30 13:07:37.473437 unknown[784]: fetched base config from "system" Jan 30 13:07:37.473453 unknown[784]: fetched base config from "system" Jan 30 13:07:37.473759 ignition[784]: fetch: fetch complete Jan 30 13:07:37.473459 unknown[784]: fetched user config from "hetzner" Jan 30 13:07:37.473765 ignition[784]: fetch: fetch passed Jan 30 13:07:37.473808 ignition[784]: Ignition finished successfully Jan 30 13:07:37.476587 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:07:37.482161 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:07:37.497789 ignition[791]: Ignition 2.20.0 Jan 30 13:07:37.497804 ignition[791]: Stage: kargs Jan 30 13:07:37.498027 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:07:37.498041 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:07:37.501065 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
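The fetch stage above fails on attempt #1 because it races DHCP: the link-local metadata service is only reachable once systemd-networkd has brought an address up, so Ignition simply retries the GET until it succeeds (attempt #2 here, right after the DHCPv4 leases land). Roughly, in illustrative Python rather than Ignition's actual Go, with the URL taken from the log and the backoff schedule assumed:

    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

    userdata = None
    for attempt in range(1, 10):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                userdata = resp.read()  # later hashed (SHA512) and parsed, as logged
                break
        except OSError:
            # e.g. "connect: network is unreachable" before DHCP completes
            time.sleep(min(2 ** attempt, 30))  # assumed backoff; Ignition's own differs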
Jan 30 13:07:37.498802 ignition[791]: kargs: kargs passed Jan 30 13:07:37.498845 ignition[791]: Ignition finished successfully Jan 30 13:07:37.513144 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:07:37.523813 ignition[798]: Ignition 2.20.0 Jan 30 13:07:37.523826 ignition[798]: Stage: disks Jan 30 13:07:37.524014 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:07:37.524026 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:07:37.524764 ignition[798]: disks: disks passed Jan 30 13:07:37.526210 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:07:37.524806 ignition[798]: Ignition finished successfully Jan 30 13:07:37.527750 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:07:37.528287 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:07:37.528763 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:07:37.529925 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:07:37.530877 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:07:37.541156 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:07:37.556432 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 13:07:37.559010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:07:37.565753 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:07:37.643969 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:07:37.644501 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:07:37.645631 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:07:37.651027 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:07:37.654027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:07:37.656126 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:07:37.658019 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:07:37.659091 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:07:37.663854 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:07:37.664972 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (814) Jan 30 13:07:37.667327 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:07:37.667375 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:07:37.667386 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:07:37.667396 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:07:37.671286 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:07:37.679856 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:07:37.684098 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:07:37.724679 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:07:37.726634 coreos-metadata[816]: Jan 30 13:07:37.726 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 13:07:37.728927 coreos-metadata[816]: Jan 30 13:07:37.728 INFO Fetch successful Jan 30 13:07:37.729506 coreos-metadata[816]: Jan 30 13:07:37.729 INFO wrote hostname ci-4186-1-0-d-73846a73c0 to /sysroot/etc/hostname Jan 30 13:07:37.731606 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:07:37.732565 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:07:37.737889 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:07:37.742893 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:07:37.828076 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:07:37.832042 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:07:37.835078 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:07:37.845972 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:07:37.860627 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:07:37.866099 ignition[933]: INFO : Ignition 2.20.0 Jan 30 13:07:37.866099 ignition[933]: INFO : Stage: mount Jan 30 13:07:37.867193 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:07:37.867193 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:07:37.867193 ignition[933]: INFO : mount: mount passed Jan 30 13:07:37.867193 ignition[933]: INFO : Ignition finished successfully Jan 30 13:07:37.868582 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:07:37.878074 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:07:38.070030 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:07:38.075261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:07:38.086981 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943) Jan 30 13:07:38.089308 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:07:38.089379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:07:38.091443 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:07:38.096756 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:07:38.096801 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:07:38.101495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:07:38.124639 ignition[960]: INFO : Ignition 2.20.0 Jan 30 13:07:38.124639 ignition[960]: INFO : Stage: files Jan 30 13:07:38.125808 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:07:38.125808 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:07:38.127203 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:07:38.128147 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:07:38.128147 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:07:38.132638 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:07:38.133306 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:07:38.134073 unknown[960]: wrote ssh authorized keys file for user: core Jan 30 13:07:38.134866 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:07:38.136976 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:07:38.137838 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:07:38.334407 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:07:39.141290 systemd-networkd[781]: eth1: Gained IPv6LL Jan 30 13:07:39.205077 systemd-networkd[781]: eth0: Gained IPv6LL Jan 30 13:07:39.245406 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:07:39.247009 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:07:39.247009 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:07:39.791854 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:07:39.899056 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:07:39.899056 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:07:39.901225 ignition[960]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:07:39.901225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:07:40.547433 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:07:40.833780 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:07:40.833780 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:07:40.836244 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:07:40.836244 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:07:40.836244 ignition[960]: INFO : files: files passed Jan 30 13:07:40.836244 ignition[960]: INFO : Ignition finished successfully Jan 30 13:07:40.837357 systemd[1]: Finished ignition-files.service - Ignition (files). 
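Everything in the files stage above comes from the user-supplied Ignition config: the helm and cilium tarballs, a few manifests, the kubernetes sysext image, and a prepare-helm.service unit that gets preset to enabled. The unit body itself is never logged, so the following is only a plausible sketch, inferred from the /opt/helm-v3.13.2-linux-amd64.tar.gz path written earlier; the real unit may differ:

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=!/opt/bin/helm

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/usr/bin/tar --extract --file=/opt/helm-v3.13.2-linux-amd64.tar.gz --directory=/opt/bin --strip-components=1 linux-amd64/helm

    [Install]
    WantedBy=multi-user.target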
Jan 30 13:07:40.848590 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:07:40.852379 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:07:40.854130 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:07:40.854624 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:07:40.865788 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:07:40.865788 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:07:40.868502 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:07:40.869614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:07:40.870475 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:07:40.875048 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:07:40.900892 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:07:40.901028 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:07:40.902288 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:07:40.903498 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:07:40.904049 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:07:40.905092 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:07:40.919469 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:07:40.929068 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:07:40.936825 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:07:40.937420 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:07:40.938436 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:07:40.939418 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:07:40.939510 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:07:40.940655 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:07:40.941305 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:07:40.942303 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:07:40.943214 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:07:40.944096 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:07:40.945086 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:07:40.946101 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:07:40.947129 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:07:40.948086 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:07:40.949106 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:07:40.950016 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:07:40.950105 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 13:07:40.951197 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:07:40.951818 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:07:40.952676 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:07:40.954878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:07:40.955454 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:07:40.955546 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:07:40.956843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:07:40.956943 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:07:40.958118 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:07:40.958248 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:07:40.959160 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:07:40.959291 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:07:40.966071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:07:40.967019 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:07:40.967711 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:07:40.969542 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:07:40.970461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:07:40.971142 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:07:40.976111 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:07:40.977102 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:07:40.980232 ignition[1013]: INFO : Ignition 2.20.0 Jan 30 13:07:40.980232 ignition[1013]: INFO : Stage: umount Jan 30 13:07:40.985065 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:07:40.985065 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:07:40.985065 ignition[1013]: INFO : umount: umount passed Jan 30 13:07:40.985065 ignition[1013]: INFO : Ignition finished successfully Jan 30 13:07:40.982839 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:07:40.982931 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:07:40.984480 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:07:40.984651 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:07:40.987219 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:07:40.987265 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:07:40.988165 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:07:40.988207 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:07:40.988632 systemd[1]: Stopped target network.target - Network. Jan 30 13:07:40.989023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:07:40.989066 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:07:40.989672 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:07:40.994075 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 13:07:40.997994 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:07:40.998473 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:07:40.998876 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:07:40.999317 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:07:40.999358 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:07:41.000766 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:07:41.000802 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:07:41.003385 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:07:41.003430 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:07:41.003862 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:07:41.003904 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:07:41.007218 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:07:41.008051 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:07:41.010822 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:07:41.011031 systemd-networkd[781]: eth1: DHCPv6 lease lost Jan 30 13:07:41.014808 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:07:41.014899 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:07:41.017564 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 30 13:07:41.021059 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:07:41.022656 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:07:41.026141 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:07:41.026257 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:07:41.030612 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:07:41.030669 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:07:41.037077 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:07:41.038882 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:07:41.038934 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:07:41.039859 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:07:41.039904 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:07:41.040824 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:07:41.040866 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:07:41.041911 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:07:41.041999 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:07:41.042933 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:07:41.045303 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:07:41.045393 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:07:41.050764 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:07:41.050828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:07:41.053080 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 30 13:07:41.053172 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:07:41.057549 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:07:41.057713 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:07:41.058816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:07:41.058860 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:07:41.059649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:07:41.059694 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:07:41.060620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:07:41.060664 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:07:41.062054 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:07:41.062097 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:07:41.063065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:07:41.063109 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:07:41.072080 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:07:41.073899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:07:41.073965 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:07:41.074479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:07:41.074536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:07:41.077865 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:07:41.077971 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:07:41.078873 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:07:41.080817 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:07:41.094938 systemd[1]: Switching root. Jan 30 13:07:41.125179 systemd-journald[188]: Journal stopped Jan 30 13:07:42.181079 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Jan 30 13:07:42.181150 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:07:42.181168 kernel: SELinux: policy capability open_perms=1 Jan 30 13:07:42.181186 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:07:42.181196 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:07:42.181206 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:07:42.181216 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:07:42.181226 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:07:42.181236 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:07:42.181246 kernel: audit: type=1403 audit(1738242461.256:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:07:42.181257 systemd[1]: Successfully loaded SELinux policy in 43.138ms. Jan 30 13:07:42.181283 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.801ms. 
Jan 30 13:07:42.181297 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:07:42.181309 systemd[1]: Detected virtualization kvm. Jan 30 13:07:42.181319 systemd[1]: Detected architecture x86-64. Jan 30 13:07:42.181330 systemd[1]: Detected first boot. Jan 30 13:07:42.181340 systemd[1]: Hostname set to <ci-4186-1-0-d-73846a73c0>. Jan 30 13:07:42.181351 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:07:42.181362 zram_generator::config[1056]: No configuration found. Jan 30 13:07:42.181374 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:07:42.181387 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:07:42.181397 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:07:42.181408 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:07:42.181419 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:07:42.181430 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:07:42.181441 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:07:42.181452 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:07:42.181463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:07:42.181476 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:07:42.181488 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:07:42.181498 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:07:42.181509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:07:42.181520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:07:42.181530 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:07:42.181542 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:07:42.181553 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:07:42.181564 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:07:42.181577 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:07:42.181588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:07:42.181598 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:07:42.181620 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:07:42.181631 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:07:42.181641 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:07:42.181654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:07:42.181665 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:07:42.181676 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:07:42.181687 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:07:42.181698 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:07:42.181708 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:07:42.181719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:07:42.181730 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:07:42.181746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:07:42.181760 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:07:42.181773 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:07:42.181786 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:07:42.181797 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:07:42.181808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:42.181819 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:07:42.181832 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:07:42.181843 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:07:42.181854 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:07:42.181864 systemd[1]: Reached target machines.target - Containers. Jan 30 13:07:42.181875 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:07:42.181886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:07:42.181896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:07:42.181907 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:07:42.181919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:07:42.181931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:07:42.181942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:07:42.181964 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:07:42.181975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:07:42.181988 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:07:42.181999 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:07:42.182010 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:07:42.182020 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:07:42.182034 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:07:42.182044 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:07:42.182054 kernel: fuse: init (API version 7.39) Jan 30 13:07:42.182065 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
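The modprobe@*.service starts above are all instances of a single template unit: systemd substitutes the instance name for %i, which is why dm_mod, drm, efi_pstore, fuse and loop each appear as their own service inside the system-modprobe.slice created earlier. A sketch of such a template, close to but not verbatim the one systemd ships:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/usr/sbin/modprobe -abq %i

The leading "-" on ExecStart tells systemd to treat a failing modprobe as success, which is why a missing module only produces a skipped unit rather than a boot failure.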
Jan 30 13:07:42.182076 kernel: loop: module loaded Jan 30 13:07:42.182086 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:07:42.182097 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:07:42.182107 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:07:42.182118 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:07:42.182132 systemd[1]: Stopped verity-setup.service. Jan 30 13:07:42.182143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:42.182154 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:07:42.182164 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:07:42.182175 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:07:42.182185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:07:42.182199 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:07:42.182209 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:07:42.182220 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:07:42.182231 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:07:42.182242 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:07:42.182252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:07:42.182263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:07:42.182276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:07:42.182287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:07:42.182300 kernel: ACPI: bus type drm_connector registered Jan 30 13:07:42.182310 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:07:42.182321 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:07:42.182332 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:07:42.182345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:07:42.182356 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:07:42.182366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:07:42.182377 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:07:42.182406 systemd-journald[1136]: Collecting audit messages is disabled. Jan 30 13:07:42.182428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:07:42.182439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:07:42.182452 systemd-journald[1136]: Journal started Jan 30 13:07:42.182472 systemd-journald[1136]: Runtime Journal (/run/log/journal/da4cbd71597a4e67900b158dc587e6b5) is 4.8M, max 38.3M, 33.5M free. Jan 30 13:07:41.821603 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:07:41.851430 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:07:41.852041 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:07:42.184888 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 30 13:07:42.189050 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:07:42.202531 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:07:42.209540 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:07:42.215086 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:07:42.216302 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:07:42.216386 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:07:42.217721 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:07:42.221050 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:07:42.228630 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:07:42.229236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:07:42.231428 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:07:42.237406 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:07:42.237968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:07:42.241092 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:07:42.241591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:07:42.248111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:07:42.251186 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:07:42.255362 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:07:42.258495 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:07:42.261111 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:07:42.263515 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:07:42.273892 systemd-journald[1136]: Time spent on flushing to /var/log/journal/da4cbd71597a4e67900b158dc587e6b5 is 21.616ms for 1138 entries. Jan 30 13:07:42.273892 systemd-journald[1136]: System Journal (/var/log/journal/da4cbd71597a4e67900b158dc587e6b5) is 8.0M, max 584.8M, 576.8M free. Jan 30 13:07:42.344658 systemd-journald[1136]: Received client request to flush runtime journal. Jan 30 13:07:42.344713 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 13:07:42.285378 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:07:42.286089 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:07:42.294491 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:07:42.305587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:07:42.317596 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:07:42.320458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:07:42.347780 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:07:42.359971 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:07:42.361319 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:07:42.370810 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:07:42.374508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:07:42.386806 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:07:42.395119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:07:42.402986 kernel: loop1: detected capacity change from 0 to 141000 Jan 30 13:07:42.431305 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 30 13:07:42.431861 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 30 13:07:42.441509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:07:42.452968 kernel: loop2: detected capacity change from 0 to 8 Jan 30 13:07:42.480052 kernel: loop3: detected capacity change from 0 to 138184 Jan 30 13:07:42.524265 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 13:07:42.551981 kernel: loop5: detected capacity change from 0 to 141000 Jan 30 13:07:42.585984 kernel: loop6: detected capacity change from 0 to 8 Jan 30 13:07:42.591980 kernel: loop7: detected capacity change from 0 to 138184 Jan 30 13:07:42.619306 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 13:07:42.620568 (sd-merge)[1202]: Merged extensions into '/usr'. Jan 30 13:07:42.625660 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:07:42.626031 systemd[1]: Reloading... Jan 30 13:07:42.719990 zram_generator::config[1231]: No configuration found. Jan 30 13:07:42.768155 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:07:42.834099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:07:42.880303 systemd[1]: Reloading finished in 253 ms. Jan 30 13:07:42.904266 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:07:42.905232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:07:42.906038 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:07:42.918084 systemd[1]: Starting ensure-sysext.service... Jan 30 13:07:42.921099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:07:42.928193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:07:42.934065 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:07:42.934078 systemd[1]: Reloading... Jan 30 13:07:42.957396 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
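The (sd-merge) lines a few entries up are systemd-sysext at work: each of the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-hetzner' extensions is a read-only image whose /usr and /opt trees get overlaid onto the running system, which is how the kubernetes image installed by Ignition becomes live /usr content without repartitioning. An image is only merged if it carries a matching release file; an illustrative, not verbatim, example:

    # /usr/lib/extension-release.d/extension-release.kubernetes
    # (illustrative contents; the real file ships inside the sysext image)
    ID=flatcar
    SYSEXT_LEVEL=1.0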
Jan 30 13:07:42.957659 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:07:42.958621 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:07:42.958918 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jan 30 13:07:42.959547 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jan 30 13:07:42.962625 systemd-udevd[1274]: Using default interface naming scheme 'v255'. Jan 30 13:07:42.964033 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:07:42.964198 systemd-tmpfiles[1273]: Skipping /boot Jan 30 13:07:42.984451 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:07:42.984602 systemd-tmpfiles[1273]: Skipping /boot Jan 30 13:07:43.029972 zram_generator::config[1301]: No configuration found. Jan 30 13:07:43.189342 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:07:43.196967 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:07:43.203969 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:07:43.222984 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1320) Jan 30 13:07:43.233356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:07:43.297371 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:07:43.299000 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:07:43.299218 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:07:43.306399 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:07:43.306801 systemd[1]: Reloading finished in 372 ms. Jan 30 13:07:43.327725 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:07:43.328240 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:07:43.329477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:07:43.349973 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:07:43.378414 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 13:07:43.383520 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 30 13:07:43.383587 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 30 13:07:43.383891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 13:07:43.386017 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:07:43.389037 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:07:43.389075 kernel: [drm] features: -context_init Jan 30 13:07:43.389989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:43.392031 kernel: [drm] number of scanouts: 1 Jan 30 13:07:43.392067 kernel: [drm] number of cap sets: 0 Jan 30 13:07:43.394618 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 13:07:43.394345 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 30 13:07:43.398224 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:07:43.398515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:07:43.406148 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:07:43.406203 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 13:07:43.400876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:07:43.417263 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:07:43.425994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:07:43.433874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:07:43.434103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:07:43.443304 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:07:43.447238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:07:43.450079 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:07:43.461259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:07:43.466467 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:07:43.470887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:07:43.471981 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:43.474725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:07:43.475314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:07:43.477781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:07:43.479108 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:07:43.480289 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:07:43.480465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:07:43.514165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:07:43.516160 systemd[1]: Finished ensure-sysext.service. Jan 30 13:07:43.518377 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:07:43.529451 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:07:43.542316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:43.542564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:07:43.549250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:07:43.562664 augenrules[1422]: No rules Jan 30 13:07:43.563904 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:07:43.566931 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:07:43.570083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 13:07:43.570274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:07:43.579204 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:07:43.581880 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:07:43.585706 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:07:43.588357 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:07:43.591659 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:07:43.592613 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:07:43.592809 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:07:43.595286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:07:43.595472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:07:43.596570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:07:43.596730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:07:43.597392 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:07:43.597565 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:07:43.598247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:07:43.598410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:07:43.599496 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:07:43.599662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:07:43.607384 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:07:43.609758 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:07:43.629113 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:07:43.629650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:07:43.629731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:07:43.632554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:07:43.635394 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:07:43.644307 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:07:43.653132 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:07:43.697564 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:07:43.700526 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:07:43.707103 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:07:43.717985 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
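The two lvm warnings are benign here: no lvmetad daemon is running, so the tools fall back to scanning block devices directly, exactly as logged. On lvm2 branches that still know about the daemon (2.02.x; the option was removed in 2.03), the fallback could be made explicit with a fragment like this sketch:

    # /etc/lvm/lvm.conf (fragment, assuming an lvm2 2.02.x-era config)
    global {
        # 0 = never consult lvmetad; always scan devices, matching the log above
        use_lvmetad = 0
    }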
Jan 30 13:07:43.756102 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:07:43.756907 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:07:43.764865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:07:43.772734 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:07:43.782761 systemd-networkd[1393]: lo: Link UP Jan 30 13:07:43.782770 systemd-networkd[1393]: lo: Gained carrier Jan 30 13:07:43.785719 systemd-networkd[1393]: Enumeration completed Jan 30 13:07:43.785876 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:07:43.788998 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:43.789007 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:07:43.793070 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:43.793083 systemd-networkd[1393]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:07:43.793678 systemd-networkd[1393]: eth0: Link UP Jan 30 13:07:43.793682 systemd-networkd[1393]: eth0: Gained carrier Jan 30 13:07:43.793694 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:43.793933 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:07:43.794268 systemd-resolved[1395]: Positive Trust Anchors: Jan 30 13:07:43.794277 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:07:43.794304 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:07:43.797405 systemd-networkd[1393]: eth1: Link UP Jan 30 13:07:43.797415 systemd-networkd[1393]: eth1: Gained carrier Jan 30 13:07:43.797434 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:07:43.801157 systemd-resolved[1395]: Using system hostname 'ci-4186-1-0-d-73846a73c0'. Jan 30 13:07:43.803204 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:07:43.803869 systemd[1]: Reached target network.target - Network. Jan 30 13:07:43.804433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:07:43.805689 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:07:43.808282 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:07:43.808820 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
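Both NICs above match the catch-all zz-default.network, which is why networkd warns about matching on a "potentially unpredictable interface name". A catch-all unit of this kind typically looks like the sketch below (assumed contents; the actual file was not logged). Pinning [Match] to a stable key such as MACAddress= instead of Name=* is what silences the warning:

    # /usr/lib/systemd/network/zz-default.network (typical catch-all; assumed)
    [Match]
    Name=*

    [Network]
    DHCP=yes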
Jan 30 13:07:43.809492 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:07:43.810984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:07:43.813859 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:07:43.814380 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:07:43.814416 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:07:43.815153 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:07:43.820723 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:07:43.823431 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:07:43.828039 systemd-networkd[1393]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:07:43.829842 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Jan 30 13:07:43.831293 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:07:43.834247 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:07:43.835266 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:07:43.836176 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:07:43.836847 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:07:43.836933 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:07:43.848380 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:07:43.854197 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:07:43.857005 systemd-networkd[1393]: eth0: DHCPv4 address 138.199.163.224/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 13:07:43.858168 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:07:43.858204 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Jan 30 13:07:43.864067 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:07:43.869146 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:07:43.870480 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:07:43.879113 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:07:43.881093 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:07:43.891146 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 13:07:43.896777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:07:43.897650 jq[1465]: false Jan 30 13:07:43.902158 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:07:43.909130 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:07:43.910037 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:07:43.910549 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
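docker.socket and sshd.socket above are socket-activated: systemd holds the listening file descriptor and only starts the matching service on the first connection. On a running host the socket-to-service mapping can be inspected with:

    systemctl list-sockets docker.socket sshd.socket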
Jan 30 13:07:43.915148 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:07:43.918010 coreos-metadata[1463]: Jan 30 13:07:43.917 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 13:07:43.918373 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:07:43.929116 coreos-metadata[1463]: Jan 30 13:07:43.929 INFO Fetch successful Jan 30 13:07:43.929734 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:07:43.930408 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:07:43.932592 coreos-metadata[1463]: Jan 30 13:07:43.932 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 13:07:43.933770 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:07:43.933959 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:07:43.942971 jq[1478]: true Jan 30 13:07:43.946975 coreos-metadata[1463]: Jan 30 13:07:43.945 INFO Fetch successful Jan 30 13:07:43.980979 jq[1486]: true Jan 30 13:07:43.988380 dbus-daemon[1464]: [system] SELinux support is enabled Jan 30 13:07:43.990186 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:07:43.997521 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:07:43.997571 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:07:43.998450 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:07:43.998483 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:07:44.007348 update_engine[1475]: I20250130 13:07:44.007287 1475 main.cc:92] Flatcar Update Engine starting Jan 30 13:07:44.008846 extend-filesystems[1466]: Found loop4 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found loop5 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found loop6 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found loop7 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda1 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda2 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda3 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found usr Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda4 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda6 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda7 Jan 30 13:07:44.013650 extend-filesystems[1466]: Found sda9 Jan 30 13:07:44.013650 extend-filesystems[1466]: Checking size of /dev/sda9 Jan 30 13:07:44.036921 update_engine[1475]: I20250130 13:07:44.035200 1475 update_check_scheduler.cc:74] Next update check in 8m29s Jan 30 13:07:44.021486 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:07:44.041836 tar[1481]: linux-amd64/helm Jan 30 13:07:44.021721 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
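coreos-metadata is fetching Hetzner's link-local metadata service, which any process on the instance can reach. For manual debugging, the equivalent requests to the endpoints shown in the log are:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks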
Jan 30 13:07:44.027520 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:07:44.037129 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:07:44.045178 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:07:44.069079 extend-filesystems[1466]: Resized partition /dev/sda9 Jan 30 13:07:44.079920 extend-filesystems[1520]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:07:44.099603 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 13:07:44.120437 systemd-logind[1474]: New seat seat0. Jan 30 13:07:44.129259 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 13:07:44.129284 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:07:44.129505 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:07:44.151178 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:07:44.154644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:07:44.213662 bash[1528]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:07:44.211605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:07:44.222321 systemd[1]: Starting sshkeys.service... Jan 30 13:07:44.228974 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1324) Jan 30 13:07:44.282143 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:07:44.294687 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:07:44.322099 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 13:07:44.356523 extend-filesystems[1520]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 13:07:44.356523 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 13:07:44.356523 extend-filesystems[1520]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 13:07:44.366444 extend-filesystems[1466]: Resized filesystem in /dev/sda9 Jan 30 13:07:44.366444 extend-filesystems[1466]: Found sr0 Jan 30 13:07:44.358274 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:07:44.358510 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:07:44.380671 coreos-metadata[1543]: Jan 30 13:07:44.379 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 13:07:44.380671 coreos-metadata[1543]: Jan 30 13:07:44.379 INFO Fetch successful Jan 30 13:07:44.383511 unknown[1543]: wrote ssh authorized keys file for user: core Jan 30 13:07:44.388368 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:07:44.410062 update-ssh-keys[1551]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:07:44.411712 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:07:44.416118 containerd[1498]: time="2025-01-30T13:07:44.414313290Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:07:44.418851 systemd[1]: Finished sshkeys.service. 
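The extend-filesystems run above grows the mounted root filesystem online: 9393147 blocks at 4 KiB is roughly 35.8 GiB, up from about 6.2 GiB (1617920 x 4 KiB). After the partition itself has been enlarged (Flatcar handles that earlier in boot), the manual equivalent of the final step is a single online grow:

    # grow a mounted ext4 filesystem to fill its (already enlarged) partition
    resize2fs /dev/sda9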
Jan 30 13:07:44.424189 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:07:44.447704 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:07:44.459038 containerd[1498]: time="2025-01-30T13:07:44.458993657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:44.463013 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:07:44.467373 containerd[1498]: time="2025-01-30T13:07:44.467334935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:44.467420 containerd[1498]: time="2025-01-30T13:07:44.467367816Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:07:44.467420 containerd[1498]: time="2025-01-30T13:07:44.467401169Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:07:44.467646 containerd[1498]: time="2025-01-30T13:07:44.467621753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:07:44.467674 containerd[1498]: time="2025-01-30T13:07:44.467663401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:44.467787 containerd[1498]: time="2025-01-30T13:07:44.467762707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:44.467787 containerd[1498]: time="2025-01-30T13:07:44.467783607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468025540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468043203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468056488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468083218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468199667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468547950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468688974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468702279Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468826823Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:07:44.469208 containerd[1498]: time="2025-01-30T13:07:44.468914057Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:07:44.473276 containerd[1498]: time="2025-01-30T13:07:44.473247867Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:07:44.473926 containerd[1498]: time="2025-01-30T13:07:44.473896233Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:07:44.473998 containerd[1498]: time="2025-01-30T13:07:44.473973318Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:07:44.474050 containerd[1498]: time="2025-01-30T13:07:44.474002492Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:07:44.474050 containerd[1498]: time="2025-01-30T13:07:44.474016419Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:07:44.474173 containerd[1498]: time="2025-01-30T13:07:44.474149338Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:07:44.474344 containerd[1498]: time="2025-01-30T13:07:44.474321981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:07:44.474476 containerd[1498]: time="2025-01-30T13:07:44.474438360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:07:44.474476 containerd[1498]: time="2025-01-30T13:07:44.474472755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:07:44.474517 containerd[1498]: time="2025-01-30T13:07:44.474486410Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:07:44.474517 containerd[1498]: time="2025-01-30T13:07:44.474498823Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:07:44.474517 containerd[1498]: time="2025-01-30T13:07:44.474509613Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:07:44.474571 containerd[1498]: time="2025-01-30T13:07:44.474519563Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:07:44.474571 containerd[1498]: time="2025-01-30T13:07:44.474533488Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:07:44.474571 containerd[1498]: time="2025-01-30T13:07:44.474546523Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 13:07:44.474571 containerd[1498]: time="2025-01-30T13:07:44.474556973Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:07:44.474638 containerd[1498]: time="2025-01-30T13:07:44.474577161Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:07:44.474638 containerd[1498]: time="2025-01-30T13:07:44.474587430Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:07:44.474638 containerd[1498]: time="2025-01-30T13:07:44.474603971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474638 containerd[1498]: time="2025-01-30T13:07:44.474615171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474638 containerd[1498]: time="2025-01-30T13:07:44.474632424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474644036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474654826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474665045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474674102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474683961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474695602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474708617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474719 containerd[1498]: time="2025-01-30T13:07:44.474718185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474728414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474737922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474757999Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474775573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474785722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474794418Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474829193Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:07:44.474839 containerd[1498]: time="2025-01-30T13:07:44.474841086Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:07:44.474987 containerd[1498]: time="2025-01-30T13:07:44.474850012Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:07:44.474987 containerd[1498]: time="2025-01-30T13:07:44.474860151Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:07:44.474987 containerd[1498]: time="2025-01-30T13:07:44.474867996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.474987 containerd[1498]: time="2025-01-30T13:07:44.474877383Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:07:44.474987 containerd[1498]: time="2025-01-30T13:07:44.474886671Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:07:44.474987 containerd[1498]: time="2025-01-30T13:07:44.474895447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:07:44.476592 containerd[1498]: time="2025-01-30T13:07:44.475522173Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:07:44.476592 containerd[1498]: time="2025-01-30T13:07:44.475570714Z" level=info msg="Connect containerd service" Jan 30 13:07:44.476592 containerd[1498]: time="2025-01-30T13:07:44.475595501Z" level=info msg="using legacy CRI server" Jan 30 13:07:44.476592 containerd[1498]: time="2025-01-30T13:07:44.475601472Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:07:44.476592 containerd[1498]: time="2025-01-30T13:07:44.475703874Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:07:44.477819 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:07:44.478113 containerd[1498]: time="2025-01-30T13:07:44.477487580Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:07:44.478224 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:07:44.482009 containerd[1498]: time="2025-01-30T13:07:44.478384062Z" level=info msg="Start subscribing containerd event" Jan 30 13:07:44.482009 containerd[1498]: time="2025-01-30T13:07:44.478422384Z" level=info msg="Start recovering state" Jan 30 13:07:44.482009 containerd[1498]: time="2025-01-30T13:07:44.478487155Z" level=info msg="Start event monitor" Jan 30 13:07:44.482009 containerd[1498]: time="2025-01-30T13:07:44.478503767Z" level=info msg="Start snapshots syncer" Jan 30 13:07:44.482009 containerd[1498]: time="2025-01-30T13:07:44.478512062Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:07:44.482009 containerd[1498]: time="2025-01-30T13:07:44.478518264Z" level=info msg="Start streaming server" Jan 30 13:07:44.483731 containerd[1498]: time="2025-01-30T13:07:44.483622118Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:07:44.483860 containerd[1498]: time="2025-01-30T13:07:44.483843593Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:07:44.485777 containerd[1498]: time="2025-01-30T13:07:44.483998554Z" level=info msg="containerd successfully booted in 0.073216s" Jan 30 13:07:44.491204 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:07:44.493652 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:07:44.506636 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:07:44.519190 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:07:44.527392 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
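The CRI configuration dumped above corresponds to a containerd config.toml along these lines (fragment reconstructed from the logged values, not read from disk). The "failed to load cni during init" error is expected at this stage: nothing has written a network config into /etc/cni/net.d yet, and the CRI plugin retries once one appears:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          # matches SystemdCgroup:true in the runc options dumped above
          SystemdCgroup = true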
Jan 30 13:07:44.530261 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:07:44.743928 tar[1481]: linux-amd64/LICENSE Jan 30 13:07:44.743928 tar[1481]: linux-amd64/README.md Jan 30 13:07:44.760747 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:07:45.669169 systemd-networkd[1393]: eth0: Gained IPv6LL Jan 30 13:07:45.669997 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Jan 30 13:07:45.673059 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:07:45.674502 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:07:45.686190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:45.695877 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:07:45.721273 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:07:45.797364 systemd-networkd[1393]: eth1: Gained IPv6LL Jan 30 13:07:45.797923 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Jan 30 13:07:46.419519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:46.423501 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:07:46.424270 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:07:46.427532 systemd[1]: Startup finished in 1.195s (kernel) + 6.564s (initrd) + 5.212s (userspace) = 12.973s. Jan 30 13:07:46.438237 agetty[1573]: failed to open credentials directory Jan 30 13:07:46.443944 agetty[1572]: failed to open credentials directory Jan 30 13:07:46.937767 kubelet[1593]: E0130 13:07:46.937700 1593 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:07:46.942447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:07:46.942686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:07:57.062026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:07:57.070639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:57.236090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:57.240353 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:07:57.275907 kubelet[1612]: E0130 13:07:57.275844 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:07:57.282522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:07:57.282710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:08:07.311845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
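The kubelet failures that follow are a deliberate restart loop, not corruption: the unit starts, finds no /var/lib/kubelet/config.yaml, exits with status 1, and systemd schedules the next attempt (the restart counter climbs to 6 further down). That file is normally written by kubeadm init or kubeadm join; a minimal hand-written stand-in would begin like this sketch (real kubeadm output carries many more fields):

    # /var/lib/kubelet/config.yaml (minimal sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd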
Jan 30 13:08:07.318267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:08:07.500341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:08:07.505161 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:08:07.539535 kubelet[1629]: E0130 13:08:07.539471 1629 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:08:07.542005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:08:07.542214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:08:16.620604 systemd-timesyncd[1429]: Contacted time server 85.214.83.151:123 (2.flatcar.pool.ntp.org). Jan 30 13:08:16.620737 systemd-timesyncd[1429]: Initial clock synchronization to Thu 2025-01-30 13:08:16.620236 UTC. Jan 30 13:08:16.620986 systemd-resolved[1395]: Clock change detected. Flushing caches. Jan 30 13:08:18.242950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:08:18.255669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:08:18.375307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:08:18.380879 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:08:18.420181 kubelet[1645]: E0130 13:08:18.420124 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:08:18.423619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:08:18.423805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:08:28.492946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:08:28.501682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:08:28.625322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:08:28.629558 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:08:28.669504 kubelet[1663]: E0130 13:08:28.669441 1663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:08:28.673227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:08:28.673414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:08:30.170467 update_engine[1475]: I20250130 13:08:30.170338 1475 update_attempter.cc:509] Updating boot flags... 
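The jump from 13:08:07 to 13:08:16 in adjacent entries above marks timesyncd's first successful synchronization against 2.flatcar.pool.ntp.org (part of the visible gap is the clock step itself, part is ordinary elapsed time), and systemd-resolved flushes its caches because a stepped clock invalidates its TTL bookkeeping. The pool server comes from the image defaults; overriding it would look like this sketch:

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=2.flatcar.pool.ntp.org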
Jan 30 13:08:30.229540 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1680) Jan 30 13:08:30.279510 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1684) Jan 30 13:08:36.571230 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:08:36.575913 systemd[1]: Started sshd@0-138.199.163.224:22-139.178.89.65:36294.service - OpenSSH per-connection server daemon (139.178.89.65:36294). Jan 30 13:08:37.557199 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 36294 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:08:37.559295 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:08:37.568152 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:08:37.572887 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:08:37.575449 systemd-logind[1474]: New session 1 of user core. Jan 30 13:08:37.587225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:08:37.593908 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:08:37.606302 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:08:37.707257 systemd[1694]: Queued start job for default target default.target. Jan 30 13:08:37.717646 systemd[1694]: Created slice app.slice - User Application Slice. Jan 30 13:08:37.717672 systemd[1694]: Reached target paths.target - Paths. Jan 30 13:08:37.717684 systemd[1694]: Reached target timers.target - Timers. Jan 30 13:08:37.719038 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:08:37.731947 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:08:37.732055 systemd[1694]: Reached target sockets.target - Sockets. Jan 30 13:08:37.732070 systemd[1694]: Reached target basic.target - Basic System. Jan 30 13:08:37.732108 systemd[1694]: Reached target default.target - Main User Target. Jan 30 13:08:37.732139 systemd[1694]: Startup finished in 118ms. Jan 30 13:08:37.732231 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:08:37.739649 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:08:38.425481 systemd[1]: Started sshd@1-138.199.163.224:22-139.178.89.65:36304.service - OpenSSH per-connection server daemon (139.178.89.65:36304). Jan 30 13:08:38.743411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:08:38.752206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:08:38.923539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:08:38.936751 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:08:38.968336 kubelet[1715]: E0130 13:08:38.968281 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:08:38.972151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:08:38.972328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:08:39.420065 sshd[1705]: Accepted publickey for core from 139.178.89.65 port 36304 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:08:39.422845 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:08:39.431018 systemd-logind[1474]: New session 2 of user core. Jan 30 13:08:39.447722 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:08:40.105842 sshd[1724]: Connection closed by 139.178.89.65 port 36304 Jan 30 13:08:40.106576 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jan 30 13:08:40.111084 systemd[1]: sshd@1-138.199.163.224:22-139.178.89.65:36304.service: Deactivated successfully. Jan 30 13:08:40.113351 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:08:40.114450 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:08:40.115896 systemd-logind[1474]: Removed session 2. Jan 30 13:08:40.283721 systemd[1]: Started sshd@2-138.199.163.224:22-139.178.89.65:36310.service - OpenSSH per-connection server daemon (139.178.89.65:36310). Jan 30 13:08:41.288046 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 36310 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:08:41.289692 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:08:41.294578 systemd-logind[1474]: New session 3 of user core. Jan 30 13:08:41.303632 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:08:41.964953 sshd[1731]: Connection closed by 139.178.89.65 port 36310 Jan 30 13:08:41.966210 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 30 13:08:41.971248 systemd[1]: sshd@2-138.199.163.224:22-139.178.89.65:36310.service: Deactivated successfully. Jan 30 13:08:41.974371 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:08:41.976969 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:08:41.979342 systemd-logind[1474]: Removed session 3. Jan 30 13:08:42.139051 systemd[1]: Started sshd@3-138.199.163.224:22-139.178.89.65:56434.service - OpenSSH per-connection server daemon (139.178.89.65:56434). Jan 30 13:08:43.128412 sshd[1736]: Accepted publickey for core from 139.178.89.65 port 56434 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:08:43.130062 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:08:43.134621 systemd-logind[1474]: New session 4 of user core. Jan 30 13:08:43.141865 systemd[1]: Started session-4.scope - Session 4 of User core. 
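sshd logs each accepted client key as an SHA256 fingerprint ("RSA SHA256:5b7a..." above). The same string can be produced locally to confirm which key a client offered; the path below is only an example:

    # prints "<bits> SHA256:<fingerprint> <comment> (RSA)" for comparison with the sshd log
    ssh-keygen -lf ~/.ssh/id_rsa.pub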
Jan 30 13:08:43.803628 sshd[1738]: Connection closed by 139.178.89.65 port 56434 Jan 30 13:08:43.804770 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jan 30 13:08:43.808106 systemd[1]: sshd@3-138.199.163.224:22-139.178.89.65:56434.service: Deactivated successfully. Jan 30 13:08:43.810241 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:08:43.811674 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:08:43.813006 systemd-logind[1474]: Removed session 4. Jan 30 13:08:43.986883 systemd[1]: Started sshd@4-138.199.163.224:22-139.178.89.65:56448.service - OpenSSH per-connection server daemon (139.178.89.65:56448). Jan 30 13:08:44.991848 sshd[1743]: Accepted publickey for core from 139.178.89.65 port 56448 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:08:44.993383 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:08:44.998164 systemd-logind[1474]: New session 5 of user core. Jan 30 13:08:45.007622 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:08:45.525911 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:08:45.526316 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:08:45.542265 sudo[1746]: pam_unix(sudo:session): session closed for user root Jan 30 13:08:45.703217 sshd[1745]: Connection closed by 139.178.89.65 port 56448 Jan 30 13:08:45.704391 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 30 13:08:45.710621 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:08:45.711685 systemd[1]: sshd@4-138.199.163.224:22-139.178.89.65:56448.service: Deactivated successfully. Jan 30 13:08:45.715154 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:08:45.716664 systemd-logind[1474]: Removed session 5. Jan 30 13:08:45.884247 systemd[1]: Started sshd@5-138.199.163.224:22-139.178.89.65:56456.service - OpenSSH per-connection server daemon (139.178.89.65:56456). Jan 30 13:08:46.888394 sshd[1751]: Accepted publickey for core from 139.178.89.65 port 56456 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:08:46.891624 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:08:46.901314 systemd-logind[1474]: New session 6 of user core. Jan 30 13:08:46.911779 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:08:47.414947 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:08:47.415320 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:08:47.419173 sudo[1755]: pam_unix(sudo:session): session closed for user root Jan 30 13:08:47.425581 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:08:47.425928 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:08:47.447010 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:08:47.499550 augenrules[1777]: No rules Jan 30 13:08:47.500421 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:08:47.500756 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
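The "No rules" report from augenrules further down is consistent with the sudo commands that precede it: 80-selinux.rules and 99-default.rules are deleted, leaving /etc/audit/rules.d/ empty when audit-rules restarts. On such a host the loaded rule set can be confirmed with:

    augenrules --check   # compare rules.d/ against the compiled audit.rules
    auditctl -l          # list currently loaded rules (prints "No rules" when empty)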
Jan 30 13:08:47.502052 sudo[1754]: pam_unix(sudo:session): session closed for user root Jan 30 13:08:47.662654 sshd[1753]: Connection closed by 139.178.89.65 port 56456 Jan 30 13:08:47.663363 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 30 13:08:47.669826 systemd[1]: sshd@5-138.199.163.224:22-139.178.89.65:56456.service: Deactivated successfully. Jan 30 13:08:47.673596 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:08:47.674928 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:08:47.677260 systemd-logind[1474]: Removed session 6. Jan 30 13:08:47.838023 systemd[1]: Started sshd@6-138.199.163.224:22-139.178.89.65:56460.service - OpenSSH per-connection server daemon (139.178.89.65:56460). Jan 30 13:08:48.819449 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 56460 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:08:48.821973 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:08:48.831447 systemd-logind[1474]: New session 7 of user core. Jan 30 13:08:48.842793 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:08:48.993266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 13:08:49.001040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:08:49.149755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:08:49.156802 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:08:49.190714 kubelet[1796]: E0130 13:08:49.190658 1796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:08:49.194027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:08:49.194204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:08:49.335643 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:08:49.336019 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:08:49.567717 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:08:49.570022 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:08:49.775917 dockerd[1823]: time="2025-01-30T13:08:49.775848754Z" level=info msg="Starting up" Jan 30 13:08:49.836123 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1450634315-merged.mount: Deactivated successfully. Jan 30 13:08:49.866464 dockerd[1823]: time="2025-01-30T13:08:49.866417149Z" level=info msg="Loading containers: start." Jan 30 13:08:50.024520 kernel: Initializing XFRM netlink socket Jan 30 13:08:50.104267 systemd-networkd[1393]: docker0: Link UP Jan 30 13:08:50.137239 dockerd[1823]: time="2025-01-30T13:08:50.137181810Z" level=info msg="Loading containers: done." Jan 30 13:08:50.150467 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3535429912-merged.mount: Deactivated successfully. 
Jan 30 13:08:50.153138 dockerd[1823]: time="2025-01-30T13:08:50.153099948Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:08:50.153202 dockerd[1823]: time="2025-01-30T13:08:50.153176037Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:08:50.153298 dockerd[1823]: time="2025-01-30T13:08:50.153272306Z" level=info msg="Daemon has completed initialization" Jan 30 13:08:50.181455 dockerd[1823]: time="2025-01-30T13:08:50.181386129Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:08:50.181544 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:08:51.254002 containerd[1498]: time="2025-01-30T13:08:51.253956227Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:08:51.842108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004206384.mount: Deactivated successfully. Jan 30 13:08:52.798718 containerd[1498]: time="2025-01-30T13:08:52.798663219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:52.799701 containerd[1498]: time="2025-01-30T13:08:52.799528015Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677104" Jan 30 13:08:52.800454 containerd[1498]: time="2025-01-30T13:08:52.800401370Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:52.802873 containerd[1498]: time="2025-01-30T13:08:52.802818757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:52.804246 containerd[1498]: time="2025-01-30T13:08:52.803754212Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.549764418s" Jan 30 13:08:52.804246 containerd[1498]: time="2025-01-30T13:08:52.803785213Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:08:52.823364 containerd[1498]: time="2025-01-30T13:08:52.823316536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:08:54.151222 containerd[1498]: time="2025-01-30T13:08:54.151154836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:54.152224 containerd[1498]: time="2025-01-30T13:08:54.152175578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605765" Jan 30 13:08:54.153027 containerd[1498]: time="2025-01-30T13:08:54.152988888Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:54.155431 containerd[1498]: time="2025-01-30T13:08:54.155370543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:54.156871 containerd[1498]: time="2025-01-30T13:08:54.156422366Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.33305987s" Jan 30 13:08:54.156871 containerd[1498]: time="2025-01-30T13:08:54.156459528Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:08:54.178884 containerd[1498]: time="2025-01-30T13:08:54.178843824Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:08:55.221080 containerd[1498]: time="2025-01-30T13:08:55.221002420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:55.222021 containerd[1498]: time="2025-01-30T13:08:55.221970957Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783084" Jan 30 13:08:55.223000 containerd[1498]: time="2025-01-30T13:08:55.222951047Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:55.225341 containerd[1498]: time="2025-01-30T13:08:55.225264620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:55.226469 containerd[1498]: time="2025-01-30T13:08:55.226321509Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.047240465s" Jan 30 13:08:55.226469 containerd[1498]: time="2025-01-30T13:08:55.226348341Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:08:55.248088 containerd[1498]: time="2025-01-30T13:08:55.248055872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:08:56.201568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48623274.mount: Deactivated successfully. 
Jan 30 13:08:56.471953 containerd[1498]: time="2025-01-30T13:08:56.471824379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:56.472668 containerd[1498]: time="2025-01-30T13:08:56.472459678Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058363" Jan 30 13:08:56.473182 containerd[1498]: time="2025-01-30T13:08:56.473137189Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:56.474826 containerd[1498]: time="2025-01-30T13:08:56.474773974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:56.475435 containerd[1498]: time="2025-01-30T13:08:56.475248232Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.227019595s" Jan 30 13:08:56.475435 containerd[1498]: time="2025-01-30T13:08:56.475274292Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:08:56.495562 containerd[1498]: time="2025-01-30T13:08:56.495526084Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:08:57.049204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1456162729.mount: Deactivated successfully. 
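[Annotation] The pulls above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, now coredns) go through containerd's CRI plugin, which is why each image carries the io.cri-containerd.image=managed label. The same operation can be reproduced with containerd's Go client; a hedged sketch, assuming the default socket path and the v1 module path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack one of the images seen in the log.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.30.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}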
Jan 30 13:08:57.703819 containerd[1498]: time="2025-01-30T13:08:57.703738434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:57.704726 containerd[1498]: time="2025-01-30T13:08:57.704690172Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Jan 30 13:08:57.705733 containerd[1498]: time="2025-01-30T13:08:57.705694091Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:57.708057 containerd[1498]: time="2025-01-30T13:08:57.708025481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:57.709162 containerd[1498]: time="2025-01-30T13:08:57.709054277Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.213491323s" Jan 30 13:08:57.709162 containerd[1498]: time="2025-01-30T13:08:57.709079055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:08:57.729695 containerd[1498]: time="2025-01-30T13:08:57.729566387Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:08:58.248384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036594850.mount: Deactivated successfully. 
Jan 30 13:08:58.255698 containerd[1498]: time="2025-01-30T13:08:58.255587455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:58.257065 containerd[1498]: time="2025-01-30T13:08:58.256944862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Jan 30 13:08:58.257896 containerd[1498]: time="2025-01-30T13:08:58.257812974Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:58.261408 containerd[1498]: time="2025-01-30T13:08:58.261314302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:58.263142 containerd[1498]: time="2025-01-30T13:08:58.262614538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 533.011369ms" Jan 30 13:08:58.263142 containerd[1498]: time="2025-01-30T13:08:58.262660987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:08:58.300405 containerd[1498]: time="2025-01-30T13:08:58.300349098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:08:58.864713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879973379.mount: Deactivated successfully. Jan 30 13:08:59.242751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 13:08:59.251113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:08:59.369740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:08:59.373683 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:08:59.414937 kubelet[2213]: E0130 13:08:59.414867 2213 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:08:59.418955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:08:59.419146 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:09:01.315533 containerd[1498]: time="2025-01-30T13:09:01.315466042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:01.316607 containerd[1498]: time="2025-01-30T13:09:01.316572034Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Jan 30 13:09:01.317567 containerd[1498]: time="2025-01-30T13:09:01.317543667Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:01.320273 containerd[1498]: time="2025-01-30T13:09:01.320211114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:01.321616 containerd[1498]: time="2025-01-30T13:09:01.321594026Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.021196184s" Jan 30 13:09:01.321721 containerd[1498]: time="2025-01-30T13:09:01.321702725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:09:04.547804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:09:04.553666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:09:04.574254 systemd[1]: Reloading requested from client PID 2291 ('systemctl') (unit session-7.scope)... Jan 30 13:09:04.574266 systemd[1]: Reloading... Jan 30 13:09:04.698524 zram_generator::config[2334]: No configuration found. Jan 30 13:09:04.795189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:09:04.864131 systemd[1]: Reloading finished in 289 ms. Jan 30 13:09:04.910292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:09:04.914813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:09:04.916214 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:09:04.916650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:09:04.921818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:09:05.042914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:09:05.047174 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:09:05.086153 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:09:05.086153 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
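[Annotation] The deprecation warnings here and just below ask for --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir to move into the file given by --config. A hedged sketch of inspecting the corresponding KubeletConfiguration fields (field names per the kubelet v1beta1 config API; sigs.k8s.io/yaml assumed available):

package main

import (
	"fmt"
	"log"
	"os"

	"sigs.k8s.io/yaml"
)

func main() {
	// The same path the kubelet is started with in this log.
	raw, err := os.ReadFile("/var/lib/kubelet/config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var cfg map[string]interface{}
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	// containerRuntimeEndpoint and volumePluginDir are the config-file
	// homes of the deprecated flags named in the warnings above.
	fmt.Println("containerRuntimeEndpoint:", cfg["containerRuntimeEndpoint"])
	fmt.Println("volumePluginDir:", cfg["volumePluginDir"])
}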
Jan 30 13:09:05.086153 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:09:05.086683 kubelet[2386]: I0130 13:09:05.086192 2386 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:09:05.476809 kubelet[2386]: I0130 13:09:05.476749 2386 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:09:05.476809 kubelet[2386]: I0130 13:09:05.476777 2386 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:09:05.477046 kubelet[2386]: I0130 13:09:05.476965 2386 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:09:05.495869 kubelet[2386]: E0130 13:09:05.495600 2386 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.163.224:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.496170 kubelet[2386]: I0130 13:09:05.495987 2386 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:09:05.518436 kubelet[2386]: I0130 13:09:05.518403 2386 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:09:05.518943 kubelet[2386]: I0130 13:09:05.518885 2386 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:09:05.519196 kubelet[2386]: I0130 13:09:05.518937 2386 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-d-73846a73c0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:09:05.519271 kubelet[2386]: I0130 13:09:05.519210 2386 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 13:09:05.519271 kubelet[2386]: I0130 13:09:05.519226 2386 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:09:05.519471 kubelet[2386]: I0130 13:09:05.519435 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:09:05.520657 kubelet[2386]: I0130 13:09:05.520627 2386 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:09:05.520712 kubelet[2386]: I0130 13:09:05.520658 2386 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:09:05.520712 kubelet[2386]: I0130 13:09:05.520693 2386 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:09:05.520766 kubelet[2386]: I0130 13:09:05.520724 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:09:05.527784 kubelet[2386]: W0130 13:09:05.527693 2386 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.163.224:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.527835 kubelet[2386]: E0130 13:09:05.527804 2386 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.163.224:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.529512 kubelet[2386]: W0130 13:09:05.527919 2386 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.163.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-d-73846a73c0&limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.529512 kubelet[2386]: E0130 13:09:05.527978 2386 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.163.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-d-73846a73c0&limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.530967 kubelet[2386]: I0130 13:09:05.530932 2386 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:09:05.533325 kubelet[2386]: I0130 13:09:05.532986 2386 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:09:05.533325 kubelet[2386]: W0130 13:09:05.533090 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:09:05.534727 kubelet[2386]: I0130 13:09:05.534689 2386 server.go:1264] "Started kubelet" Jan 30 13:09:05.535428 kubelet[2386]: I0130 13:09:05.535360 2386 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:09:05.538530 kubelet[2386]: I0130 13:09:05.537838 2386 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:09:05.541947 kubelet[2386]: I0130 13:09:05.541224 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:09:05.541947 kubelet[2386]: I0130 13:09:05.541654 2386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:09:05.542195 kubelet[2386]: I0130 13:09:05.542122 2386 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:09:05.546996 kubelet[2386]: E0130 13:09:05.544933 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.163.224:6443/api/v1/namespaces/default/events\": dial tcp 138.199.163.224:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-d-73846a73c0.181f7a62b4105c3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-d-73846a73c0,UID:ci-4186-1-0-d-73846a73c0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-d-73846a73c0,},FirstTimestamp:2025-01-30 13:09:05.534655548 +0000 UTC m=+0.483199884,LastTimestamp:2025-01-30 13:09:05.534655548 +0000 UTC m=+0.483199884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-d-73846a73c0,}" Jan 30 13:09:05.549977 kubelet[2386]: E0130 13:09:05.549418 2386 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186-1-0-d-73846a73c0\" not found" Jan 30 13:09:05.550078 kubelet[2386]: I0130 13:09:05.550064 2386 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:09:05.550964 kubelet[2386]: I0130 13:09:05.550949 2386 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:09:05.551063 kubelet[2386]: I0130 13:09:05.551052 2386 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:09:05.551351 kubelet[2386]: W0130 13:09:05.551320 2386 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.163.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.551430 kubelet[2386]: E0130 13:09:05.551418 2386 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.163.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.551750 kubelet[2386]: E0130 13:09:05.551717 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.163.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-d-73846a73c0?timeout=10s\": dial tcp 138.199.163.224:6443: connect: connection refused" interval="200ms" Jan 30 13:09:05.552130 kubelet[2386]: E0130 13:09:05.552115 2386 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:09:05.552798 kubelet[2386]: I0130 13:09:05.552783 2386 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:09:05.553099 kubelet[2386]: I0130 13:09:05.553083 2386 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:09:05.554481 kubelet[2386]: I0130 13:09:05.554448 2386 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:09:05.564714 kubelet[2386]: I0130 13:09:05.564678 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:09:05.565856 kubelet[2386]: I0130 13:09:05.565828 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:09:05.565856 kubelet[2386]: I0130 13:09:05.565855 2386 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:09:05.565923 kubelet[2386]: I0130 13:09:05.565871 2386 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:09:05.565923 kubelet[2386]: E0130 13:09:05.565909 2386 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:09:05.573337 kubelet[2386]: W0130 13:09:05.573301 2386 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.163.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.573385 kubelet[2386]: E0130 13:09:05.573361 2386 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.163.224:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:05.587771 kubelet[2386]: I0130 13:09:05.587552 2386 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:09:05.587771 kubelet[2386]: I0130 13:09:05.587567 2386 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:09:05.587771 kubelet[2386]: I0130 13:09:05.587581 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:09:05.589315 kubelet[2386]: I0130 13:09:05.589303 2386 policy_none.go:49] "None policy: Start" Jan 30 13:09:05.590038 kubelet[2386]: I0130 13:09:05.590013 2386 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:09:05.590038 kubelet[2386]: I0130 13:09:05.590038 2386 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:09:05.596834 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:09:05.606373 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:09:05.609320 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
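[Annotation] The kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice hierarchy created above is the systemd cgroup layout for pod QoS classes (the nodeConfig dump earlier shows CgroupDriver "systemd"); per-pod slices like the ones created just below hang off these parents. A simplified, hedged sketch of how such slice names are composed (the real kubelet additionally escapes UID characters for systemd):

package main

import (
	"fmt"
	"strings"
)

// sliceForPod composes a slice name in the style seen in this log:
// kubepods-<qos>-pod<uid>.slice, with Guaranteed pods placed directly
// under kubepods.slice. Simplified illustration only.
func sliceForPod(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if strings.EqualFold(qosClass, "guaranteed") {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), uid)
}

func main() {
	// UID taken from the kube-apiserver static pod admitted below.
	fmt.Println(sliceForPod("Burstable", "645698ea5f46b33dbe7628590336a7b0"))
}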
Jan 30 13:09:05.617254 kubelet[2386]: I0130 13:09:05.617236 2386 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:09:05.617872 kubelet[2386]: I0130 13:09:05.617505 2386 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:09:05.617872 kubelet[2386]: I0130 13:09:05.617614 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:09:05.619656 kubelet[2386]: E0130 13:09:05.619625 2386 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-d-73846a73c0\" not found" Jan 30 13:09:05.653541 kubelet[2386]: I0130 13:09:05.653472 2386 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.654055 kubelet[2386]: E0130 13:09:05.653999 2386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.163.224:6443/api/v1/nodes\": dial tcp 138.199.163.224:6443: connect: connection refused" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.666358 kubelet[2386]: I0130 13:09:05.666235 2386 topology_manager.go:215] "Topology Admit Handler" podUID="645698ea5f46b33dbe7628590336a7b0" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.668149 kubelet[2386]: I0130 13:09:05.668094 2386 topology_manager.go:215] "Topology Admit Handler" podUID="a2ad11259b608695e5b7a1574a583b1d" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.670939 kubelet[2386]: I0130 13:09:05.670304 2386 topology_manager.go:215] "Topology Admit Handler" podUID="15d2b526c990fed492422a19c19754a6" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.678973 systemd[1]: Created slice kubepods-burstable-pod645698ea5f46b33dbe7628590336a7b0.slice - libcontainer container kubepods-burstable-pod645698ea5f46b33dbe7628590336a7b0.slice. Jan 30 13:09:05.693900 systemd[1]: Created slice kubepods-burstable-poda2ad11259b608695e5b7a1574a583b1d.slice - libcontainer container kubepods-burstable-poda2ad11259b608695e5b7a1574a583b1d.slice. Jan 30 13:09:05.698058 systemd[1]: Created slice kubepods-burstable-pod15d2b526c990fed492422a19c19754a6.slice - libcontainer container kubepods-burstable-pod15d2b526c990fed492422a19c19754a6.slice. 
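[Annotation] The three "Topology Admit Handler" pods above are static pods: the kubelet reads their manifests from the staticPodPath logged earlier ("Adding static pod path" path="/etc/kubernetes/manifests") and runs them before any API server exists, which is what breaks the bootstrap deadlock seen in the connection-refused errors. A hedged sketch listing that directory (kubeadm conventionally writes kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml, and usually etcd.yaml there):

package main

import (
	"fmt"
	"log"
	"path/filepath"
)

func main() {
	// staticPodPath from the log; each YAML file here becomes a mirror pod
	// once the API server is reachable.
	manifests, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range manifests {
		fmt.Println(m)
	}
}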
Jan 30 13:09:05.752371 kubelet[2386]: E0130 13:09:05.752268 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.163.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-d-73846a73c0?timeout=10s\": dial tcp 138.199.163.224:6443: connect: connection refused" interval="400ms" Jan 30 13:09:05.853143 kubelet[2386]: I0130 13:09:05.853010 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/645698ea5f46b33dbe7628590336a7b0-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-d-73846a73c0\" (UID: \"645698ea5f46b33dbe7628590336a7b0\") " pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853143 kubelet[2386]: I0130 13:09:05.853057 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853143 kubelet[2386]: I0130 13:09:05.853075 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853143 kubelet[2386]: I0130 13:09:05.853092 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15d2b526c990fed492422a19c19754a6-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-d-73846a73c0\" (UID: \"15d2b526c990fed492422a19c19754a6\") " pod="kube-system/kube-scheduler-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853143 kubelet[2386]: I0130 13:09:05.853107 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/645698ea5f46b33dbe7628590336a7b0-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-d-73846a73c0\" (UID: \"645698ea5f46b33dbe7628590336a7b0\") " pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853605 kubelet[2386]: I0130 13:09:05.853121 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/645698ea5f46b33dbe7628590336a7b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-d-73846a73c0\" (UID: \"645698ea5f46b33dbe7628590336a7b0\") " pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853605 kubelet[2386]: I0130 13:09:05.853135 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853605 kubelet[2386]: I0130 13:09:05.853149 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.853605 kubelet[2386]: I0130 13:09:05.853161 2386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.856264 kubelet[2386]: I0130 13:09:05.856224 2386 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.856872 kubelet[2386]: E0130 13:09:05.856818 2386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.163.224:6443/api/v1/nodes\": dial tcp 138.199.163.224:6443: connect: connection refused" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:05.989556 containerd[1498]: time="2025-01-30T13:09:05.989364688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-d-73846a73c0,Uid:645698ea5f46b33dbe7628590336a7b0,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:06.003578 containerd[1498]: time="2025-01-30T13:09:06.002928771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-d-73846a73c0,Uid:15d2b526c990fed492422a19c19754a6,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:06.004030 containerd[1498]: time="2025-01-30T13:09:06.003979904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-d-73846a73c0,Uid:a2ad11259b608695e5b7a1574a583b1d,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:06.153047 kubelet[2386]: E0130 13:09:06.152976 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.163.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-d-73846a73c0?timeout=10s\": dial tcp 138.199.163.224:6443: connect: connection refused" interval="800ms" Jan 30 13:09:06.259928 kubelet[2386]: I0130 13:09:06.259797 2386 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:06.260140 kubelet[2386]: E0130 13:09:06.260111 2386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.163.224:6443/api/v1/nodes\": dial tcp 138.199.163.224:6443: connect: connection refused" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:06.432801 kubelet[2386]: W0130 13:09:06.432661 2386 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.163.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:06.432801 kubelet[2386]: E0130 13:09:06.432773 2386 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.163.224:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:06.439681 kubelet[2386]: W0130 13:09:06.439553 2386 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://138.199.163.224:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:06.439681 kubelet[2386]: E0130 13:09:06.439686 2386 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.163.224:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:06.489124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376731099.mount: Deactivated successfully. Jan 30 13:09:06.496194 containerd[1498]: time="2025-01-30T13:09:06.496132199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:09:06.497918 containerd[1498]: time="2025-01-30T13:09:06.497881965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:09:06.499608 containerd[1498]: time="2025-01-30T13:09:06.499547591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 30 13:09:06.500147 containerd[1498]: time="2025-01-30T13:09:06.500095596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:09:06.502053 containerd[1498]: time="2025-01-30T13:09:06.501981632Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:09:06.503125 containerd[1498]: time="2025-01-30T13:09:06.503085635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:09:06.503280 containerd[1498]: time="2025-01-30T13:09:06.503221696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:09:06.506434 containerd[1498]: time="2025-01-30T13:09:06.506324020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:09:06.508008 kubelet[2386]: W0130 13:09:06.507941 2386 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.163.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-d-73846a73c0&limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:06.508163 kubelet[2386]: E0130 13:09:06.508133 2386 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.163.224:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-d-73846a73c0&limit=500&resourceVersion=0": dial tcp 138.199.163.224:6443: connect: connection refused Jan 30 13:09:06.508650 containerd[1498]: time="2025-01-30T13:09:06.508610830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.085575ms" Jan 30 13:09:06.511316 containerd[1498]: time="2025-01-30T13:09:06.511051443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.960292ms" Jan 30 13:09:06.514138 containerd[1498]: time="2025-01-30T13:09:06.514092701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.759813ms" Jan 30 13:09:06.620314 containerd[1498]: time="2025-01-30T13:09:06.619930751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:06.620314 containerd[1498]: time="2025-01-30T13:09:06.619979824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:06.620314 containerd[1498]: time="2025-01-30T13:09:06.619992689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:06.620314 containerd[1498]: time="2025-01-30T13:09:06.620060297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:06.621559 containerd[1498]: time="2025-01-30T13:09:06.618821285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:06.621559 containerd[1498]: time="2025-01-30T13:09:06.621428887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:06.621559 containerd[1498]: time="2025-01-30T13:09:06.621445448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:06.625524 containerd[1498]: time="2025-01-30T13:09:06.623096606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:06.625524 containerd[1498]: time="2025-01-30T13:09:06.623142734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:06.625524 containerd[1498]: time="2025-01-30T13:09:06.623155209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:06.625524 containerd[1498]: time="2025-01-30T13:09:06.623218339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:06.625524 containerd[1498]: time="2025-01-30T13:09:06.625452207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:06.651632 systemd[1]: Started cri-containerd-b2a18720b369a6d1455bd11662192a3e6bf68837fa83b62a780e7cf1a4c67b05.scope - libcontainer container b2a18720b369a6d1455bd11662192a3e6bf68837fa83b62a780e7cf1a4c67b05. Jan 30 13:09:06.657347 systemd[1]: Started cri-containerd-49dc1eac08664a5976eee1807e4cf97f64dd0c9808886da0053b4250b774f1fa.scope - libcontainer container 49dc1eac08664a5976eee1807e4cf97f64dd0c9808886da0053b4250b774f1fa. Jan 30 13:09:06.659635 systemd[1]: Started cri-containerd-49ea4692f5fd969ed026454c7e144a7d2602cdca12c0cd143efd5285cb84bda9.scope - libcontainer container 49ea4692f5fd969ed026454c7e144a7d2602cdca12c0cd143efd5285cb84bda9. Jan 30 13:09:06.712139 containerd[1498]: time="2025-01-30T13:09:06.712088559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-d-73846a73c0,Uid:a2ad11259b608695e5b7a1574a583b1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"49dc1eac08664a5976eee1807e4cf97f64dd0c9808886da0053b4250b774f1fa\"" Jan 30 13:09:06.715646 containerd[1498]: time="2025-01-30T13:09:06.715100302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-d-73846a73c0,Uid:645698ea5f46b33dbe7628590336a7b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a18720b369a6d1455bd11662192a3e6bf68837fa83b62a780e7cf1a4c67b05\"" Jan 30 13:09:06.719724 containerd[1498]: time="2025-01-30T13:09:06.719476715Z" level=info msg="CreateContainer within sandbox \"49dc1eac08664a5976eee1807e4cf97f64dd0c9808886da0053b4250b774f1fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:09:06.719842 containerd[1498]: time="2025-01-30T13:09:06.719551077Z" level=info msg="CreateContainer within sandbox \"b2a18720b369a6d1455bd11662192a3e6bf68837fa83b62a780e7cf1a4c67b05\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:09:06.736217 containerd[1498]: time="2025-01-30T13:09:06.736194988Z" level=info msg="CreateContainer within sandbox \"b2a18720b369a6d1455bd11662192a3e6bf68837fa83b62a780e7cf1a4c67b05\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d110663db84a6e907db01118016b8ee77a0fab727c8a1dc26202638ab2123be3\"" Jan 30 13:09:06.737041 containerd[1498]: time="2025-01-30T13:09:06.736998380Z" level=info msg="StartContainer for \"d110663db84a6e907db01118016b8ee77a0fab727c8a1dc26202638ab2123be3\"" Jan 30 13:09:06.738620 containerd[1498]: time="2025-01-30T13:09:06.738559606Z" level=info msg="CreateContainer within sandbox \"49dc1eac08664a5976eee1807e4cf97f64dd0c9808886da0053b4250b774f1fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a\"" Jan 30 13:09:06.738956 containerd[1498]: time="2025-01-30T13:09:06.738878795Z" level=info msg="StartContainer for \"05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a\"" Jan 30 13:09:06.743416 containerd[1498]: time="2025-01-30T13:09:06.743367172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-d-73846a73c0,Uid:15d2b526c990fed492422a19c19754a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"49ea4692f5fd969ed026454c7e144a7d2602cdca12c0cd143efd5285cb84bda9\"" Jan 30 13:09:06.746599 containerd[1498]: time="2025-01-30T13:09:06.746574006Z" level=info msg="CreateContainer within sandbox \"49ea4692f5fd969ed026454c7e144a7d2602cdca12c0cd143efd5285cb84bda9\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:09:06.763058 containerd[1498]: time="2025-01-30T13:09:06.761357470Z" level=info msg="CreateContainer within sandbox \"49ea4692f5fd969ed026454c7e144a7d2602cdca12c0cd143efd5285cb84bda9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b3ba01b38b12e60408db9388f1df1ba99e44123136098858cbdb5beca8a663e\"" Jan 30 13:09:06.763570 containerd[1498]: time="2025-01-30T13:09:06.763553597Z" level=info msg="StartContainer for \"6b3ba01b38b12e60408db9388f1df1ba99e44123136098858cbdb5beca8a663e\"" Jan 30 13:09:06.775644 systemd[1]: Started cri-containerd-05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a.scope - libcontainer container 05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a. Jan 30 13:09:06.783607 systemd[1]: Started cri-containerd-d110663db84a6e907db01118016b8ee77a0fab727c8a1dc26202638ab2123be3.scope - libcontainer container d110663db84a6e907db01118016b8ee77a0fab727c8a1dc26202638ab2123be3. Jan 30 13:09:06.804612 systemd[1]: Started cri-containerd-6b3ba01b38b12e60408db9388f1df1ba99e44123136098858cbdb5beca8a663e.scope - libcontainer container 6b3ba01b38b12e60408db9388f1df1ba99e44123136098858cbdb5beca8a663e. Jan 30 13:09:06.853302 containerd[1498]: time="2025-01-30T13:09:06.853260923Z" level=info msg="StartContainer for \"05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a\" returns successfully" Jan 30 13:09:06.867264 containerd[1498]: time="2025-01-30T13:09:06.867215338Z" level=info msg="StartContainer for \"d110663db84a6e907db01118016b8ee77a0fab727c8a1dc26202638ab2123be3\" returns successfully" Jan 30 13:09:06.876900 containerd[1498]: time="2025-01-30T13:09:06.876708858Z" level=info msg="StartContainer for \"6b3ba01b38b12e60408db9388f1df1ba99e44123136098858cbdb5beca8a663e\" returns successfully" Jan 30 13:09:06.955509 kubelet[2386]: E0130 13:09:06.953634 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.163.224:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-d-73846a73c0?timeout=10s\": dial tcp 138.199.163.224:6443: connect: connection refused" interval="1.6s" Jan 30 13:09:07.064774 kubelet[2386]: I0130 13:09:07.064664 2386 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:07.065206 kubelet[2386]: E0130 13:09:07.064942 2386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.163.224:6443/api/v1/nodes\": dial tcp 138.199.163.224:6443: connect: connection refused" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:08.527557 kubelet[2386]: I0130 13:09:08.527504 2386 apiserver.go:52] "Watching apiserver" Jan 30 13:09:08.551913 kubelet[2386]: I0130 13:09:08.551862 2386 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:09:08.557927 kubelet[2386]: E0130 13:09:08.557877 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-d-73846a73c0\" not found" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:08.560374 kubelet[2386]: E0130 13:09:08.560345 2386 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186-1-0-d-73846a73c0" not found Jan 30 13:09:08.667644 kubelet[2386]: I0130 13:09:08.667424 2386 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:08.677513 
kubelet[2386]: I0130 13:09:08.676703 2386 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.008950 systemd[1]: Reloading requested from client PID 2656 ('systemctl') (unit session-7.scope)... Jan 30 13:09:10.008967 systemd[1]: Reloading... Jan 30 13:09:10.118511 zram_generator::config[2702]: No configuration found. Jan 30 13:09:10.212878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:09:10.299849 systemd[1]: Reloading finished in 290 ms. Jan 30 13:09:10.340897 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:09:10.353907 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:09:10.354183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:09:10.360715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:09:10.483588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:09:10.487213 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:09:10.541587 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:09:10.542098 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:09:10.542181 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:09:10.542511 kubelet[2747]: I0130 13:09:10.542397 2747 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:09:10.547058 kubelet[2747]: I0130 13:09:10.547033 2747 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:09:10.547058 kubelet[2747]: I0130 13:09:10.547050 2747 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:09:10.547205 kubelet[2747]: I0130 13:09:10.547185 2747 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:09:10.548322 kubelet[2747]: I0130 13:09:10.548301 2747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:09:10.550142 kubelet[2747]: I0130 13:09:10.550005 2747 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:09:10.555625 kubelet[2747]: I0130 13:09:10.555535 2747 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:09:10.556141 kubelet[2747]: I0130 13:09:10.555854 2747 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:09:10.556141 kubelet[2747]: I0130 13:09:10.555876 2747 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-d-73846a73c0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:09:10.556141 kubelet[2747]: I0130 13:09:10.556008 2747 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:09:10.556141 kubelet[2747]: I0130 13:09:10.556019 2747 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:09:10.556290 kubelet[2747]: I0130 13:09:10.556073 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:09:10.556362 kubelet[2747]: I0130 13:09:10.556350 2747 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:09:10.556538 kubelet[2747]: I0130 13:09:10.556526 2747 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:09:10.556638 kubelet[2747]: I0130 13:09:10.556627 2747 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:09:10.556699 kubelet[2747]: I0130 13:09:10.556690 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:09:10.563310 kubelet[2747]: I0130 13:09:10.563028 2747 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:09:10.565653 kubelet[2747]: I0130 13:09:10.565217 2747 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:09:10.565747 kubelet[2747]: I0130 13:09:10.565735 2747 server.go:1264] "Started kubelet" Jan 30 13:09:10.568689 kubelet[2747]: I0130 13:09:10.568677 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:09:10.571812 kubelet[2747]: I0130 13:09:10.571778 2747 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:09:10.573303 kubelet[2747]: I0130 13:09:10.573278 2747 server.go:455] "Adding 
debug handlers to kubelet server" Jan 30 13:09:10.573798 kubelet[2747]: I0130 13:09:10.573760 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:09:10.574018 kubelet[2747]: I0130 13:09:10.574003 2747 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:09:10.574474 kubelet[2747]: I0130 13:09:10.574456 2747 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:09:10.574968 kubelet[2747]: I0130 13:09:10.574945 2747 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:09:10.575766 kubelet[2747]: I0130 13:09:10.575113 2747 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:09:10.578778 kubelet[2747]: I0130 13:09:10.578689 2747 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:09:10.578778 kubelet[2747]: I0130 13:09:10.578755 2747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:09:10.582304 kubelet[2747]: E0130 13:09:10.582255 2747 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:09:10.582679 kubelet[2747]: I0130 13:09:10.582618 2747 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:09:10.588842 kubelet[2747]: I0130 13:09:10.588805 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:09:10.592950 kubelet[2747]: I0130 13:09:10.591870 2747 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:09:10.592950 kubelet[2747]: I0130 13:09:10.591891 2747 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:09:10.592950 kubelet[2747]: I0130 13:09:10.591904 2747 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:09:10.592950 kubelet[2747]: E0130 13:09:10.591936 2747 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623353 2747 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623366 2747 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623381 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623519 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623544 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623578 2747 policy_none.go:49] "None policy: Start" Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623916 2747 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:09:10.624508 kubelet[2747]: I0130 13:09:10.623929 2747 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:09:10.624971 kubelet[2747]: I0130 13:09:10.624952 2747 state_mem.go:75] "Updated machine memory state" Jan 30 13:09:10.633416 kubelet[2747]: I0130 13:09:10.633389 2747 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:09:10.634033 kubelet[2747]: I0130 13:09:10.633742 2747 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:09:10.634792 kubelet[2747]: I0130 13:09:10.634090 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:09:10.677662 kubelet[2747]: I0130 13:09:10.677630 2747 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.686366 kubelet[2747]: I0130 13:09:10.686277 2747 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.686622 kubelet[2747]: I0130 13:09:10.686579 2747 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.693071 kubelet[2747]: I0130 13:09:10.693049 2747 topology_manager.go:215] "Topology Admit Handler" podUID="645698ea5f46b33dbe7628590336a7b0" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.693402 kubelet[2747]: I0130 13:09:10.693178 2747 topology_manager.go:215] "Topology Admit Handler" podUID="a2ad11259b608695e5b7a1574a583b1d" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.693402 kubelet[2747]: I0130 13:09:10.693227 2747 topology_manager.go:215] "Topology Admit Handler" podUID="15d2b526c990fed492422a19c19754a6" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.700116 kubelet[2747]: E0130 13:09:10.700090 2747 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-1-0-d-73846a73c0\" already exists" pod="kube-system/kube-scheduler-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777467 kubelet[2747]: I0130 13:09:10.777421 2747 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777629 kubelet[2747]: I0130 13:09:10.777505 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777629 kubelet[2747]: I0130 13:09:10.777567 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777629 kubelet[2747]: I0130 13:09:10.777593 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777629 kubelet[2747]: I0130 13:09:10.777613 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/645698ea5f46b33dbe7628590336a7b0-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-d-73846a73c0\" (UID: \"645698ea5f46b33dbe7628590336a7b0\") " pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777629 kubelet[2747]: I0130 13:09:10.777628 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/645698ea5f46b33dbe7628590336a7b0-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-d-73846a73c0\" (UID: \"645698ea5f46b33dbe7628590336a7b0\") " pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777766 kubelet[2747]: I0130 13:09:10.777650 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/645698ea5f46b33dbe7628590336a7b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-d-73846a73c0\" (UID: \"645698ea5f46b33dbe7628590336a7b0\") " pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777766 kubelet[2747]: I0130 13:09:10.777668 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2ad11259b608695e5b7a1574a583b1d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-d-73846a73c0\" (UID: \"a2ad11259b608695e5b7a1574a583b1d\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:10.777766 kubelet[2747]: I0130 13:09:10.777685 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15d2b526c990fed492422a19c19754a6-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-d-73846a73c0\" (UID: \"15d2b526c990fed492422a19c19754a6\") " pod="kube-system/kube-scheduler-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:11.019681 sudo[2780]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:09:11.020064 sudo[2780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:09:11.527747 sudo[2780]: pam_unix(sudo:session): session closed for user root Jan 30 13:09:11.559862 kubelet[2747]: I0130 13:09:11.559798 2747 apiserver.go:52] "Watching apiserver" Jan 30 13:09:11.576501 kubelet[2747]: I0130 13:09:11.576447 2747 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:09:11.619131 kubelet[2747]: E0130 13:09:11.618767 2747 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-d-73846a73c0\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" Jan 30 13:09:11.637539 kubelet[2747]: I0130 13:09:11.637322 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-d-73846a73c0" podStartSLOduration=1.6372978809999998 podStartE2EDuration="1.637297881s" podCreationTimestamp="2025-01-30 13:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:09:11.636273256 +0000 UTC m=+1.140342046" watchObservedRunningTime="2025-01-30 13:09:11.637297881 +0000 UTC m=+1.141366671" Jan 30 13:09:11.654733 kubelet[2747]: I0130 13:09:11.654670 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-d-73846a73c0" podStartSLOduration=2.654653153 podStartE2EDuration="2.654653153s" podCreationTimestamp="2025-01-30 13:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:09:11.646624784 +0000 UTC m=+1.150693574" watchObservedRunningTime="2025-01-30 13:09:11.654653153 +0000 UTC m=+1.158721942" Jan 30 13:09:12.939356 sudo[1804]: pam_unix(sudo:session): session closed for user root Jan 30 13:09:13.096041 sshd[1787]: Connection closed by 139.178.89.65 port 56460 Jan 30 13:09:13.097546 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 30 13:09:13.102564 systemd[1]: sshd@6-138.199.163.224:22-139.178.89.65:56460.service: Deactivated successfully. Jan 30 13:09:13.106006 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:09:13.106349 systemd[1]: session-7.scope: Consumed 4.979s CPU time, 185.9M memory peak, 0B memory swap peak. Jan 30 13:09:13.109311 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:09:13.110845 systemd-logind[1474]: Removed session 7. 
Jan 30 13:09:19.159462 kubelet[2747]: I0130 13:09:19.159388 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-d-73846a73c0" podStartSLOduration=9.15932227 podStartE2EDuration="9.15932227s" podCreationTimestamp="2025-01-30 13:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:09:11.654299882 +0000 UTC m=+1.158368672" watchObservedRunningTime="2025-01-30 13:09:19.15932227 +0000 UTC m=+8.663391070" Jan 30 13:09:25.826518 kubelet[2747]: I0130 13:09:25.823831 2747 topology_manager.go:215] "Topology Admit Handler" podUID="1a83b8b0-c42a-4a9b-957d-38825443f21a" podNamespace="kube-system" podName="kube-proxy-2pxwt" Jan 30 13:09:25.836876 systemd[1]: Created slice kubepods-besteffort-pod1a83b8b0_c42a_4a9b_957d_38825443f21a.slice - libcontainer container kubepods-besteffort-pod1a83b8b0_c42a_4a9b_957d_38825443f21a.slice. Jan 30 13:09:25.839212 kubelet[2747]: W0130 13:09:25.839178 2747 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-d-73846a73c0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-d-73846a73c0' and this object Jan 30 13:09:25.839212 kubelet[2747]: E0130 13:09:25.839215 2747 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-d-73846a73c0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-d-73846a73c0' and this object Jan 30 13:09:25.839935 kubelet[2747]: I0130 13:09:25.839904 2747 topology_manager.go:215] "Topology Admit Handler" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" podNamespace="kube-system" podName="cilium-864lp" Jan 30 13:09:25.855642 systemd[1]: Created slice kubepods-burstable-pod765f7396_700d_42f9_a5b4_1c24ddc6850b.slice - libcontainer container kubepods-burstable-pod765f7396_700d_42f9_a5b4_1c24ddc6850b.slice. Jan 30 13:09:25.873305 kubelet[2747]: I0130 13:09:25.873262 2747 topology_manager.go:215] "Topology Admit Handler" podUID="44dc237f-6fbb-4c76-bdda-9a1b193343de" podNamespace="kube-system" podName="cilium-operator-599987898-pqzt8" Jan 30 13:09:25.880648 systemd[1]: Created slice kubepods-besteffort-pod44dc237f_6fbb_4c76_bdda_9a1b193343de.slice - libcontainer container kubepods-besteffort-pod44dc237f_6fbb_4c76_bdda_9a1b193343de.slice. Jan 30 13:09:25.885747 kubelet[2747]: I0130 13:09:25.885464 2747 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:09:25.886269 containerd[1498]: time="2025-01-30T13:09:25.886189060Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:09:25.889007 kubelet[2747]: I0130 13:09:25.888737 2747 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:09:25.968106 kubelet[2747]: I0130 13:09:25.968014 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttlcg\" (UniqueName: \"kubernetes.io/projected/1a83b8b0-c42a-4a9b-957d-38825443f21a-kube-api-access-ttlcg\") pod \"kube-proxy-2pxwt\" (UID: \"1a83b8b0-c42a-4a9b-957d-38825443f21a\") " pod="kube-system/kube-proxy-2pxwt" Jan 30 13:09:25.968106 kubelet[2747]: I0130 13:09:25.968091 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-hostproc\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968106 kubelet[2747]: I0130 13:09:25.968106 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cni-path\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968106 kubelet[2747]: I0130 13:09:25.968119 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-lib-modules\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968343 kubelet[2747]: I0130 13:09:25.968135 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvph7\" (UniqueName: \"kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-kube-api-access-cvph7\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968343 kubelet[2747]: I0130 13:09:25.968148 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/765f7396-700d-42f9-a5b4-1c24ddc6850b-clustermesh-secrets\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968343 kubelet[2747]: I0130 13:09:25.968162 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a83b8b0-c42a-4a9b-957d-38825443f21a-kube-proxy\") pod \"kube-proxy-2pxwt\" (UID: \"1a83b8b0-c42a-4a9b-957d-38825443f21a\") " pod="kube-system/kube-proxy-2pxwt" Jan 30 13:09:25.968343 kubelet[2747]: I0130 13:09:25.968176 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-hubble-tls\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968343 kubelet[2747]: I0130 13:09:25.968188 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-cgroup\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 
13:09:25.968343 kubelet[2747]: I0130 13:09:25.968206 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-xtables-lock\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968525 kubelet[2747]: I0130 13:09:25.968219 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-config-path\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968525 kubelet[2747]: I0130 13:09:25.968232 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-run\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968525 kubelet[2747]: I0130 13:09:25.968244 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-etc-cni-netd\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968525 kubelet[2747]: I0130 13:09:25.968259 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a83b8b0-c42a-4a9b-957d-38825443f21a-xtables-lock\") pod \"kube-proxy-2pxwt\" (UID: \"1a83b8b0-c42a-4a9b-957d-38825443f21a\") " pod="kube-system/kube-proxy-2pxwt" Jan 30 13:09:25.968525 kubelet[2747]: I0130 13:09:25.968273 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-net\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968660 kubelet[2747]: I0130 13:09:25.968285 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-kernel\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:25.968660 kubelet[2747]: I0130 13:09:25.968300 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a83b8b0-c42a-4a9b-957d-38825443f21a-lib-modules\") pod \"kube-proxy-2pxwt\" (UID: \"1a83b8b0-c42a-4a9b-957d-38825443f21a\") " pod="kube-system/kube-proxy-2pxwt" Jan 30 13:09:25.968660 kubelet[2747]: I0130 13:09:25.968314 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-bpf-maps\") pod \"cilium-864lp\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") " pod="kube-system/cilium-864lp" Jan 30 13:09:26.069734 kubelet[2747]: I0130 13:09:26.068731 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5q55z\" (UniqueName: \"kubernetes.io/projected/44dc237f-6fbb-4c76-bdda-9a1b193343de-kube-api-access-5q55z\") pod \"cilium-operator-599987898-pqzt8\" (UID: \"44dc237f-6fbb-4c76-bdda-9a1b193343de\") " pod="kube-system/cilium-operator-599987898-pqzt8" Jan 30 13:09:26.069734 kubelet[2747]: I0130 13:09:26.069037 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44dc237f-6fbb-4c76-bdda-9a1b193343de-cilium-config-path\") pod \"cilium-operator-599987898-pqzt8\" (UID: \"44dc237f-6fbb-4c76-bdda-9a1b193343de\") " pod="kube-system/cilium-operator-599987898-pqzt8" Jan 30 13:09:26.158960 containerd[1498]: time="2025-01-30T13:09:26.158820533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-864lp,Uid:765f7396-700d-42f9-a5b4-1c24ddc6850b,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:26.189044 containerd[1498]: time="2025-01-30T13:09:26.188980083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:26.189207 containerd[1498]: time="2025-01-30T13:09:26.189111721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:26.189207 containerd[1498]: time="2025-01-30T13:09:26.189128613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:26.190000 containerd[1498]: time="2025-01-30T13:09:26.189884116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:26.205624 systemd[1]: Started cri-containerd-4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123.scope - libcontainer container 4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123. Jan 30 13:09:26.229280 containerd[1498]: time="2025-01-30T13:09:26.229243582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-864lp,Uid:765f7396-700d-42f9-a5b4-1c24ddc6850b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\"" Jan 30 13:09:26.231605 containerd[1498]: time="2025-01-30T13:09:26.231577651Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:09:26.485943 containerd[1498]: time="2025-01-30T13:09:26.485786674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pqzt8,Uid:44dc237f-6fbb-4c76-bdda-9a1b193343de,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:26.522360 containerd[1498]: time="2025-01-30T13:09:26.522109157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:26.522360 containerd[1498]: time="2025-01-30T13:09:26.522216450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:26.522970 containerd[1498]: time="2025-01-30T13:09:26.522378145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:26.525043 containerd[1498]: time="2025-01-30T13:09:26.524919344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:26.560669 systemd[1]: Started cri-containerd-8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570.scope - libcontainer container 8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570. Jan 30 13:09:26.607724 containerd[1498]: time="2025-01-30T13:09:26.607348016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pqzt8,Uid:44dc237f-6fbb-4c76-bdda-9a1b193343de,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\"" Jan 30 13:09:27.349294 containerd[1498]: time="2025-01-30T13:09:27.349096552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2pxwt,Uid:1a83b8b0-c42a-4a9b-957d-38825443f21a,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:27.382190 containerd[1498]: time="2025-01-30T13:09:27.381637362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:27.382190 containerd[1498]: time="2025-01-30T13:09:27.381729586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:27.382190 containerd[1498]: time="2025-01-30T13:09:27.381749343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:27.382190 containerd[1498]: time="2025-01-30T13:09:27.381888545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:27.405723 systemd[1]: Started cri-containerd-1d949a751a45766f7b8baecd450c1bbf9166a04c5138cf1e31f562abd0aab431.scope - libcontainer container 1d949a751a45766f7b8baecd450c1bbf9166a04c5138cf1e31f562abd0aab431. Jan 30 13:09:27.436378 containerd[1498]: time="2025-01-30T13:09:27.436249160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2pxwt,Uid:1a83b8b0-c42a-4a9b-957d-38825443f21a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d949a751a45766f7b8baecd450c1bbf9166a04c5138cf1e31f562abd0aab431\"" Jan 30 13:09:27.446751 containerd[1498]: time="2025-01-30T13:09:27.446570423Z" level=info msg="CreateContainer within sandbox \"1d949a751a45766f7b8baecd450c1bbf9166a04c5138cf1e31f562abd0aab431\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:09:27.464474 containerd[1498]: time="2025-01-30T13:09:27.464437191Z" level=info msg="CreateContainer within sandbox \"1d949a751a45766f7b8baecd450c1bbf9166a04c5138cf1e31f562abd0aab431\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86da4cd568e548fd489f9b1c75f5a93a5fd01b0456fb98623d9ba8becb92272b\"" Jan 30 13:09:27.465430 containerd[1498]: time="2025-01-30T13:09:27.465146868Z" level=info msg="StartContainer for \"86da4cd568e548fd489f9b1c75f5a93a5fd01b0456fb98623d9ba8becb92272b\"" Jan 30 13:09:27.494786 systemd[1]: Started cri-containerd-86da4cd568e548fd489f9b1c75f5a93a5fd01b0456fb98623d9ba8becb92272b.scope - libcontainer container 86da4cd568e548fd489f9b1c75f5a93a5fd01b0456fb98623d9ba8becb92272b. 
Jan 30 13:09:27.527624 containerd[1498]: time="2025-01-30T13:09:27.527544273Z" level=info msg="StartContainer for \"86da4cd568e548fd489f9b1c75f5a93a5fd01b0456fb98623d9ba8becb92272b\" returns successfully" Jan 30 13:09:27.649714 kubelet[2747]: I0130 13:09:27.649274 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2pxwt" podStartSLOduration=2.649252927 podStartE2EDuration="2.649252927s" podCreationTimestamp="2025-01-30 13:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:09:27.64918011 +0000 UTC m=+17.153248900" watchObservedRunningTime="2025-01-30 13:09:27.649252927 +0000 UTC m=+17.153321717" Jan 30 13:09:31.303186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107715043.mount: Deactivated successfully. Jan 30 13:09:32.870682 containerd[1498]: time="2025-01-30T13:09:32.870633533Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:32.872510 containerd[1498]: time="2025-01-30T13:09:32.872108919Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:09:32.872510 containerd[1498]: time="2025-01-30T13:09:32.872239546Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:32.873630 containerd[1498]: time="2025-01-30T13:09:32.873600084Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.641995162s" Jan 30 13:09:32.873684 containerd[1498]: time="2025-01-30T13:09:32.873629561Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:09:32.875446 containerd[1498]: time="2025-01-30T13:09:32.875406014Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:09:32.877657 containerd[1498]: time="2025-01-30T13:09:32.877556670Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:09:32.951698 containerd[1498]: time="2025-01-30T13:09:32.951648293Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\"" Jan 30 13:09:32.953637 containerd[1498]: time="2025-01-30T13:09:32.952807354Z" level=info msg="StartContainer for \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\"" Jan 30 13:09:33.045708 systemd[1]: Started 
cri-containerd-a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4.scope - libcontainer container a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4. Jan 30 13:09:33.073725 containerd[1498]: time="2025-01-30T13:09:33.073691167Z" level=info msg="StartContainer for \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\" returns successfully" Jan 30 13:09:33.090703 systemd[1]: cri-containerd-a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4.scope: Deactivated successfully. Jan 30 13:09:33.179328 containerd[1498]: time="2025-01-30T13:09:33.158992843Z" level=info msg="shim disconnected" id=a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4 namespace=k8s.io Jan 30 13:09:33.179328 containerd[1498]: time="2025-01-30T13:09:33.179017444Z" level=warning msg="cleaning up after shim disconnected" id=a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4 namespace=k8s.io Jan 30 13:09:33.179328 containerd[1498]: time="2025-01-30T13:09:33.179029056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:33.673666 containerd[1498]: time="2025-01-30T13:09:33.673450106Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:09:33.688589 containerd[1498]: time="2025-01-30T13:09:33.688192500Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\"" Jan 30 13:09:33.691627 containerd[1498]: time="2025-01-30T13:09:33.690689308Z" level=info msg="StartContainer for \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\"" Jan 30 13:09:33.723652 systemd[1]: Started cri-containerd-0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f.scope - libcontainer container 0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f. Jan 30 13:09:33.750721 containerd[1498]: time="2025-01-30T13:09:33.750674968Z" level=info msg="StartContainer for \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\" returns successfully" Jan 30 13:09:33.769079 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:09:33.770079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:09:33.770176 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:09:33.776985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:09:33.777287 systemd[1]: cri-containerd-0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f.scope: Deactivated successfully. Jan 30 13:09:33.803674 containerd[1498]: time="2025-01-30T13:09:33.803609080Z" level=info msg="shim disconnected" id=0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f namespace=k8s.io Jan 30 13:09:33.804155 containerd[1498]: time="2025-01-30T13:09:33.803855284Z" level=warning msg="cleaning up after shim disconnected" id=0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f namespace=k8s.io Jan 30 13:09:33.804155 containerd[1498]: time="2025-01-30T13:09:33.803871424Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:33.815776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:09:33.821480 containerd[1498]: time="2025-01-30T13:09:33.821434455Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:09:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:09:33.937216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4-rootfs.mount: Deactivated successfully. Jan 30 13:09:34.676107 containerd[1498]: time="2025-01-30T13:09:34.675932052Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:09:34.703305 containerd[1498]: time="2025-01-30T13:09:34.703251438Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\"" Jan 30 13:09:34.703851 containerd[1498]: time="2025-01-30T13:09:34.703832952Z" level=info msg="StartContainer for \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\"" Jan 30 13:09:34.745604 systemd[1]: Started cri-containerd-f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d.scope - libcontainer container f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d. Jan 30 13:09:34.781827 systemd[1]: cri-containerd-f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d.scope: Deactivated successfully. Jan 30 13:09:34.782410 containerd[1498]: time="2025-01-30T13:09:34.782331853Z" level=info msg="StartContainer for \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\" returns successfully" Jan 30 13:09:34.802941 containerd[1498]: time="2025-01-30T13:09:34.802882327Z" level=info msg="shim disconnected" id=f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d namespace=k8s.io Jan 30 13:09:34.802941 containerd[1498]: time="2025-01-30T13:09:34.802931600Z" level=warning msg="cleaning up after shim disconnected" id=f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d namespace=k8s.io Jan 30 13:09:34.802941 containerd[1498]: time="2025-01-30T13:09:34.802943672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:34.817567 containerd[1498]: time="2025-01-30T13:09:34.817525139Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:09:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:09:34.936459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d-rootfs.mount: Deactivated successfully. Jan 30 13:09:34.968318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719555207.mount: Deactivated successfully. 
Jan 30 13:09:35.491361 containerd[1498]: time="2025-01-30T13:09:35.491292760Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:35.492001 containerd[1498]: time="2025-01-30T13:09:35.491964554Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:09:35.492881 containerd[1498]: time="2025-01-30T13:09:35.492843687Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:09:35.494030 containerd[1498]: time="2025-01-30T13:09:35.493928547Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.618493168s" Jan 30 13:09:35.494030 containerd[1498]: time="2025-01-30T13:09:35.493954145Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:09:35.495862 containerd[1498]: time="2025-01-30T13:09:35.495830644Z" level=info msg="CreateContainer within sandbox \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:09:35.506827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390389094.mount: Deactivated successfully. Jan 30 13:09:35.519598 containerd[1498]: time="2025-01-30T13:09:35.519556875Z" level=info msg="CreateContainer within sandbox \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\"" Jan 30 13:09:35.520158 containerd[1498]: time="2025-01-30T13:09:35.520128680Z" level=info msg="StartContainer for \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\"" Jan 30 13:09:35.548612 systemd[1]: Started cri-containerd-7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec.scope - libcontainer container 7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec. 
Jan 30 13:09:35.571130 containerd[1498]: time="2025-01-30T13:09:35.571098265Z" level=info msg="StartContainer for \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\" returns successfully" Jan 30 13:09:35.689528 containerd[1498]: time="2025-01-30T13:09:35.688427286Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:09:35.702817 containerd[1498]: time="2025-01-30T13:09:35.702779898Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\"" Jan 30 13:09:35.703843 containerd[1498]: time="2025-01-30T13:09:35.703811898Z" level=info msg="StartContainer for \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\"" Jan 30 13:09:35.744608 systemd[1]: Started cri-containerd-b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770.scope - libcontainer container b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770. Jan 30 13:09:35.775462 kubelet[2747]: I0130 13:09:35.774293 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-pqzt8" podStartSLOduration=1.888181183 podStartE2EDuration="10.774275003s" podCreationTimestamp="2025-01-30 13:09:25 +0000 UTC" firstStartedPulling="2025-01-30 13:09:26.608555352 +0000 UTC m=+16.112624142" lastFinishedPulling="2025-01-30 13:09:35.494649172 +0000 UTC m=+24.998717962" observedRunningTime="2025-01-30 13:09:35.713774985 +0000 UTC m=+25.217843776" watchObservedRunningTime="2025-01-30 13:09:35.774275003 +0000 UTC m=+25.278343793" Jan 30 13:09:35.823358 containerd[1498]: time="2025-01-30T13:09:35.823298669Z" level=info msg="StartContainer for \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\" returns successfully" Jan 30 13:09:35.826437 systemd[1]: cri-containerd-b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770.scope: Deactivated successfully. 
Jan 30 13:09:35.879547 containerd[1498]: time="2025-01-30T13:09:35.879436753Z" level=info msg="shim disconnected" id=b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770 namespace=k8s.io Jan 30 13:09:35.879547 containerd[1498]: time="2025-01-30T13:09:35.879500033Z" level=warning msg="cleaning up after shim disconnected" id=b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770 namespace=k8s.io Jan 30 13:09:35.879547 containerd[1498]: time="2025-01-30T13:09:35.879508659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:36.698675 containerd[1498]: time="2025-01-30T13:09:36.697719199Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:09:36.728630 containerd[1498]: time="2025-01-30T13:09:36.727589224Z" level=info msg="CreateContainer within sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\"" Jan 30 13:09:36.730120 containerd[1498]: time="2025-01-30T13:09:36.730065621Z" level=info msg="StartContainer for \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\"" Jan 30 13:09:36.771600 systemd[1]: Started cri-containerd-b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f.scope - libcontainer container b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f. Jan 30 13:09:36.801181 containerd[1498]: time="2025-01-30T13:09:36.799366210Z" level=info msg="StartContainer for \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\" returns successfully" Jan 30 13:09:36.997651 kubelet[2747]: I0130 13:09:36.997054 2747 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:09:37.024115 kubelet[2747]: I0130 13:09:37.023953 2747 topology_manager.go:215] "Topology Admit Handler" podUID="dbb98a87-4231-437d-a127-976c0a7f93eb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mjz4m" Jan 30 13:09:37.028704 kubelet[2747]: I0130 13:09:37.028575 2747 topology_manager.go:215] "Topology Admit Handler" podUID="b125778b-a007-44dc-8ab1-65238a01d14a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vh5sb" Jan 30 13:09:37.033787 systemd[1]: Created slice kubepods-burstable-poddbb98a87_4231_437d_a127_976c0a7f93eb.slice - libcontainer container kubepods-burstable-poddbb98a87_4231_437d_a127_976c0a7f93eb.slice. Jan 30 13:09:37.040939 systemd[1]: Created slice kubepods-burstable-podb125778b_a007_44dc_8ab1_65238a01d14a.slice - libcontainer container kubepods-burstable-podb125778b_a007_44dc_8ab1_65238a01d14a.slice. 
Jan 30 13:09:37.150601 kubelet[2747]: I0130 13:09:37.150540 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b125778b-a007-44dc-8ab1-65238a01d14a-config-volume\") pod \"coredns-7db6d8ff4d-vh5sb\" (UID: \"b125778b-a007-44dc-8ab1-65238a01d14a\") " pod="kube-system/coredns-7db6d8ff4d-vh5sb" Jan 30 13:09:37.150924 kubelet[2747]: I0130 13:09:37.150852 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jgz2\" (UniqueName: \"kubernetes.io/projected/b125778b-a007-44dc-8ab1-65238a01d14a-kube-api-access-6jgz2\") pod \"coredns-7db6d8ff4d-vh5sb\" (UID: \"b125778b-a007-44dc-8ab1-65238a01d14a\") " pod="kube-system/coredns-7db6d8ff4d-vh5sb" Jan 30 13:09:37.151039 kubelet[2747]: I0130 13:09:37.150880 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrcw\" (UniqueName: \"kubernetes.io/projected/dbb98a87-4231-437d-a127-976c0a7f93eb-kube-api-access-nhrcw\") pod \"coredns-7db6d8ff4d-mjz4m\" (UID: \"dbb98a87-4231-437d-a127-976c0a7f93eb\") " pod="kube-system/coredns-7db6d8ff4d-mjz4m" Jan 30 13:09:37.151039 kubelet[2747]: I0130 13:09:37.151014 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbb98a87-4231-437d-a127-976c0a7f93eb-config-volume\") pod \"coredns-7db6d8ff4d-mjz4m\" (UID: \"dbb98a87-4231-437d-a127-976c0a7f93eb\") " pod="kube-system/coredns-7db6d8ff4d-mjz4m" Jan 30 13:09:37.338866 containerd[1498]: time="2025-01-30T13:09:37.338506657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mjz4m,Uid:dbb98a87-4231-437d-a127-976c0a7f93eb,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:37.343759 containerd[1498]: time="2025-01-30T13:09:37.343739867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vh5sb,Uid:b125778b-a007-44dc-8ab1-65238a01d14a,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:39.148816 systemd-networkd[1393]: cilium_host: Link UP Jan 30 13:09:39.151135 systemd-networkd[1393]: cilium_net: Link UP Jan 30 13:09:39.152378 systemd-networkd[1393]: cilium_net: Gained carrier Jan 30 13:09:39.152921 systemd-networkd[1393]: cilium_host: Gained carrier Jan 30 13:09:39.263890 systemd-networkd[1393]: cilium_vxlan: Link UP Jan 30 13:09:39.263902 systemd-networkd[1393]: cilium_vxlan: Gained carrier Jan 30 13:09:39.544833 kernel: NET: Registered PF_ALG protocol family Jan 30 13:09:39.887138 systemd-networkd[1393]: cilium_host: Gained IPv6LL Jan 30 13:09:40.144577 systemd-networkd[1393]: cilium_net: Gained IPv6LL Jan 30 13:09:40.210755 systemd-networkd[1393]: lxc_health: Link UP Jan 30 13:09:40.221395 systemd-networkd[1393]: lxc_health: Gained carrier Jan 30 13:09:40.419175 systemd-networkd[1393]: lxccb6c01323dbd: Link UP Jan 30 13:09:40.429652 systemd-networkd[1393]: lxc400d4864b5fe: Link UP Jan 30 13:09:40.443470 kernel: eth0: renamed from tmp0438e Jan 30 13:09:40.446517 kernel: eth0: renamed from tmpa46e4 Jan 30 13:09:40.453747 systemd-networkd[1393]: lxccb6c01323dbd: Gained carrier Jan 30 13:09:40.460589 systemd-networkd[1393]: lxc400d4864b5fe: Gained carrier Jan 30 13:09:41.230727 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL Jan 30 13:09:41.679773 systemd-networkd[1393]: lxccb6c01323dbd: Gained IPv6LL Jan 30 13:09:41.806762 systemd-networkd[1393]: lxc400d4864b5fe: Gained IPv6LL Jan 30 
13:09:41.934679 systemd-networkd[1393]: lxc_health: Gained IPv6LL Jan 30 13:09:42.180051 kubelet[2747]: I0130 13:09:42.179963 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-864lp" podStartSLOduration=10.535440992 podStartE2EDuration="17.179944047s" podCreationTimestamp="2025-01-30 13:09:25 +0000 UTC" firstStartedPulling="2025-01-30 13:09:26.230364065 +0000 UTC m=+15.734432854" lastFinishedPulling="2025-01-30 13:09:32.87486712 +0000 UTC m=+22.378935909" observedRunningTime="2025-01-30 13:09:37.738918533 +0000 UTC m=+27.242987323" watchObservedRunningTime="2025-01-30 13:09:42.179944047 +0000 UTC m=+31.684012838" Jan 30 13:09:43.822667 containerd[1498]: time="2025-01-30T13:09:43.821477282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:43.822667 containerd[1498]: time="2025-01-30T13:09:43.821567232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:43.822667 containerd[1498]: time="2025-01-30T13:09:43.821588752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:43.824548 containerd[1498]: time="2025-01-30T13:09:43.822734564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:43.856649 systemd[1]: Started cri-containerd-a46e4e8b5ad3448838a90c8be48bef871011a4519e02430e5b41c6a433170f96.scope - libcontainer container a46e4e8b5ad3448838a90c8be48bef871011a4519e02430e5b41c6a433170f96. Jan 30 13:09:43.863247 containerd[1498]: time="2025-01-30T13:09:43.863130447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:43.863416 containerd[1498]: time="2025-01-30T13:09:43.863278134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:43.863416 containerd[1498]: time="2025-01-30T13:09:43.863291319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:43.864114 containerd[1498]: time="2025-01-30T13:09:43.863623864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:43.901374 systemd[1]: Started cri-containerd-0438e28696640e376eba811bd293183cdae2992a8074a1fec1b3067ddf4428e8.scope - libcontainer container 0438e28696640e376eba811bd293183cdae2992a8074a1fec1b3067ddf4428e8. 
Jan 30 13:09:43.988199 containerd[1498]: time="2025-01-30T13:09:43.986859230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vh5sb,Uid:b125778b-a007-44dc-8ab1-65238a01d14a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a46e4e8b5ad3448838a90c8be48bef871011a4519e02430e5b41c6a433170f96\"" Jan 30 13:09:43.988626 containerd[1498]: time="2025-01-30T13:09:43.988598507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mjz4m,Uid:dbb98a87-4231-437d-a127-976c0a7f93eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0438e28696640e376eba811bd293183cdae2992a8074a1fec1b3067ddf4428e8\"" Jan 30 13:09:43.991693 containerd[1498]: time="2025-01-30T13:09:43.991659167Z" level=info msg="CreateContainer within sandbox \"a46e4e8b5ad3448838a90c8be48bef871011a4519e02430e5b41c6a433170f96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:09:43.994696 containerd[1498]: time="2025-01-30T13:09:43.994664634Z" level=info msg="CreateContainer within sandbox \"0438e28696640e376eba811bd293183cdae2992a8074a1fec1b3067ddf4428e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:09:44.016607 containerd[1498]: time="2025-01-30T13:09:44.016506236Z" level=info msg="CreateContainer within sandbox \"0438e28696640e376eba811bd293183cdae2992a8074a1fec1b3067ddf4428e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc55534234c9f359ec1bdb61d8cbb9489ad64ae6c5009a46201e03db543c6e6d\"" Jan 30 13:09:44.018280 containerd[1498]: time="2025-01-30T13:09:44.018252045Z" level=info msg="StartContainer for \"bc55534234c9f359ec1bdb61d8cbb9489ad64ae6c5009a46201e03db543c6e6d\"" Jan 30 13:09:44.021032 containerd[1498]: time="2025-01-30T13:09:44.020888117Z" level=info msg="CreateContainer within sandbox \"a46e4e8b5ad3448838a90c8be48bef871011a4519e02430e5b41c6a433170f96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77ee80f7bd1f3be3318ff5724822b9283a0ef9456e252084f0a7d322b68e97c4\"" Jan 30 13:09:44.022985 containerd[1498]: time="2025-01-30T13:09:44.022932538Z" level=info msg="StartContainer for \"77ee80f7bd1f3be3318ff5724822b9283a0ef9456e252084f0a7d322b68e97c4\"" Jan 30 13:09:44.054622 systemd[1]: Started cri-containerd-77ee80f7bd1f3be3318ff5724822b9283a0ef9456e252084f0a7d322b68e97c4.scope - libcontainer container 77ee80f7bd1f3be3318ff5724822b9283a0ef9456e252084f0a7d322b68e97c4. Jan 30 13:09:44.058381 systemd[1]: Started cri-containerd-bc55534234c9f359ec1bdb61d8cbb9489ad64ae6c5009a46201e03db543c6e6d.scope - libcontainer container bc55534234c9f359ec1bdb61d8cbb9489ad64ae6c5009a46201e03db543c6e6d. 
Jan 30 13:09:44.105239 containerd[1498]: time="2025-01-30T13:09:44.104601947Z" level=info msg="StartContainer for \"bc55534234c9f359ec1bdb61d8cbb9489ad64ae6c5009a46201e03db543c6e6d\" returns successfully" Jan 30 13:09:44.105239 containerd[1498]: time="2025-01-30T13:09:44.104760415Z" level=info msg="StartContainer for \"77ee80f7bd1f3be3318ff5724822b9283a0ef9456e252084f0a7d322b68e97c4\" returns successfully" Jan 30 13:09:44.750506 kubelet[2747]: I0130 13:09:44.749910 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vh5sb" podStartSLOduration=19.749889713 podStartE2EDuration="19.749889713s" podCreationTimestamp="2025-01-30 13:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:09:44.748592127 +0000 UTC m=+34.252660927" watchObservedRunningTime="2025-01-30 13:09:44.749889713 +0000 UTC m=+34.253958513" Jan 30 13:09:44.780372 kubelet[2747]: I0130 13:09:44.780231 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mjz4m" podStartSLOduration=19.780208489 podStartE2EDuration="19.780208489s" podCreationTimestamp="2025-01-30 13:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:09:44.77912308 +0000 UTC m=+34.283191880" watchObservedRunningTime="2025-01-30 13:09:44.780208489 +0000 UTC m=+34.284277319" Jan 30 13:09:44.831397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853639332.mount: Deactivated successfully. Jan 30 13:09:48.253961 kubelet[2747]: I0130 13:09:48.253773 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:11:51.332872 systemd[1]: Started sshd@7-138.199.163.224:22-139.178.89.65:60824.service - OpenSSH per-connection server daemon (139.178.89.65:60824). Jan 30 13:11:52.338331 sshd[4118]: Accepted publickey for core from 139.178.89.65 port 60824 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:11:52.340511 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:11:52.345524 systemd-logind[1474]: New session 8 of user core. Jan 30 13:11:52.350634 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:11:53.415099 sshd[4122]: Connection closed by 139.178.89.65 port 60824 Jan 30 13:11:53.415928 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jan 30 13:11:53.421329 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:11:53.421915 systemd[1]: sshd@7-138.199.163.224:22-139.178.89.65:60824.service: Deactivated successfully. Jan 30 13:11:53.424991 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:11:53.427327 systemd-logind[1474]: Removed session 8. Jan 30 13:11:58.586078 systemd[1]: Started sshd@8-138.199.163.224:22-139.178.89.65:60840.service - OpenSSH per-connection server daemon (139.178.89.65:60840). Jan 30 13:11:59.600246 sshd[4136]: Accepted publickey for core from 139.178.89.65 port 60840 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs Jan 30 13:11:59.602104 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:11:59.606806 systemd-logind[1474]: New session 9 of user core. Jan 30 13:11:59.619659 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 30 13:12:00.368996 sshd[4138]: Connection closed by 139.178.89.65 port 60840
Jan 30 13:12:00.370105 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:00.375398 systemd[1]: sshd@8-138.199.163.224:22-139.178.89.65:60840.service: Deactivated successfully.
Jan 30 13:12:00.378277 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:12:00.379410 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:12:00.380784 systemd-logind[1474]: Removed session 9.
Jan 30 13:12:05.542911 systemd[1]: Started sshd@9-138.199.163.224:22-139.178.89.65:55048.service - OpenSSH per-connection server daemon (139.178.89.65:55048).
Jan 30 13:12:06.550462 sshd[4150]: Accepted publickey for core from 139.178.89.65 port 55048 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:06.552730 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:06.559446 systemd-logind[1474]: New session 10 of user core.
Jan 30 13:12:06.564664 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:12:07.306466 sshd[4152]: Connection closed by 139.178.89.65 port 55048
Jan 30 13:12:07.307139 sshd-session[4150]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:07.310201 systemd[1]: sshd@9-138.199.163.224:22-139.178.89.65:55048.service: Deactivated successfully.
Jan 30 13:12:07.312727 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:12:07.314877 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:12:07.316239 systemd-logind[1474]: Removed session 10.
Jan 30 13:12:07.480753 systemd[1]: Started sshd@10-138.199.163.224:22-139.178.89.65:55062.service - OpenSSH per-connection server daemon (139.178.89.65:55062).
Jan 30 13:12:08.454098 sshd[4164]: Accepted publickey for core from 139.178.89.65 port 55062 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:08.456972 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:08.465280 systemd-logind[1474]: New session 11 of user core.
Jan 30 13:12:08.476758 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:12:09.239901 sshd[4166]: Connection closed by 139.178.89.65 port 55062
Jan 30 13:12:09.240581 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:09.246390 systemd[1]: sshd@10-138.199.163.224:22-139.178.89.65:55062.service: Deactivated successfully.
Jan 30 13:12:09.250327 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 13:12:09.251658 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit.
Jan 30 13:12:09.253578 systemd-logind[1474]: Removed session 11.
Jan 30 13:12:09.406025 systemd[1]: Started sshd@11-138.199.163.224:22-139.178.89.65:55078.service - OpenSSH per-connection server daemon (139.178.89.65:55078).
Jan 30 13:12:10.385426 sshd[4175]: Accepted publickey for core from 139.178.89.65 port 55078 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:10.387122 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:10.392305 systemd-logind[1474]: New session 12 of user core.
Jan 30 13:12:10.396624 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:12:11.118584 sshd[4177]: Connection closed by 139.178.89.65 port 55078
Jan 30 13:12:11.119292 sshd-session[4175]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:11.122796 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:12:11.123704 systemd[1]: sshd@11-138.199.163.224:22-139.178.89.65:55078.service: Deactivated successfully.
Jan 30 13:12:11.125693 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:12:11.126892 systemd-logind[1474]: Removed session 12.
Jan 30 13:12:16.291201 systemd[1]: Started sshd@12-138.199.163.224:22-139.178.89.65:53354.service - OpenSSH per-connection server daemon (139.178.89.65:53354).
Jan 30 13:12:17.282120 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 53354 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:17.283776 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:17.290161 systemd-logind[1474]: New session 13 of user core.
Jan 30 13:12:17.297768 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:12:18.016166 sshd[4192]: Connection closed by 139.178.89.65 port 53354
Jan 30 13:12:18.017153 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:18.021069 systemd[1]: sshd@12-138.199.163.224:22-139.178.89.65:53354.service: Deactivated successfully.
Jan 30 13:12:18.023703 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:12:18.025972 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:12:18.028984 systemd-logind[1474]: Removed session 13.
Jan 30 13:12:18.191171 systemd[1]: Started sshd@13-138.199.163.224:22-139.178.89.65:53360.service - OpenSSH per-connection server daemon (139.178.89.65:53360).
Jan 30 13:12:19.170984 sshd[4203]: Accepted publickey for core from 139.178.89.65 port 53360 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:19.173138 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:19.182023 systemd-logind[1474]: New session 14 of user core.
Jan 30 13:12:19.187728 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:12:20.075178 sshd[4205]: Connection closed by 139.178.89.65 port 53360
Jan 30 13:12:20.075898 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:20.082726 systemd[1]: sshd@13-138.199.163.224:22-139.178.89.65:53360.service: Deactivated successfully.
Jan 30 13:12:20.084997 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:12:20.086966 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:12:20.088356 systemd-logind[1474]: Removed session 14.
Jan 30 13:12:20.252987 systemd[1]: Started sshd@14-138.199.163.224:22-139.178.89.65:53370.service - OpenSSH per-connection server daemon (139.178.89.65:53370).
Jan 30 13:12:21.268898 sshd[4215]: Accepted publickey for core from 139.178.89.65 port 53370 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:21.270696 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:21.276056 systemd-logind[1474]: New session 15 of user core.
Jan 30 13:12:21.283717 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:12:23.477190 sshd[4217]: Connection closed by 139.178.89.65 port 53370
Jan 30 13:12:23.478106 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:23.481387 systemd[1]: sshd@14-138.199.163.224:22-139.178.89.65:53370.service: Deactivated successfully.
Jan 30 13:12:23.483895 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:12:23.485759 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:12:23.487705 systemd-logind[1474]: Removed session 15.
Jan 30 13:12:23.644895 systemd[1]: Started sshd@15-138.199.163.224:22-139.178.89.65:34234.service - OpenSSH per-connection server daemon (139.178.89.65:34234).
Jan 30 13:12:24.632005 sshd[4234]: Accepted publickey for core from 139.178.89.65 port 34234 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:24.633694 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:24.638955 systemd-logind[1474]: New session 16 of user core.
Jan 30 13:12:24.642678 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:12:25.499343 sshd[4236]: Connection closed by 139.178.89.65 port 34234
Jan 30 13:12:25.499714 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:25.503715 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:12:25.504414 systemd[1]: sshd@15-138.199.163.224:22-139.178.89.65:34234.service: Deactivated successfully.
Jan 30 13:12:25.507054 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:12:25.508227 systemd-logind[1474]: Removed session 16.
Jan 30 13:12:25.678989 systemd[1]: Started sshd@16-138.199.163.224:22-139.178.89.65:34250.service - OpenSSH per-connection server daemon (139.178.89.65:34250).
Jan 30 13:12:26.665822 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 34250 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:26.667752 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:26.672785 systemd-logind[1474]: New session 17 of user core.
Jan 30 13:12:26.678631 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:12:27.410694 sshd[4247]: Connection closed by 139.178.89.65 port 34250
Jan 30 13:12:27.411358 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:27.414853 systemd[1]: sshd@16-138.199.163.224:22-139.178.89.65:34250.service: Deactivated successfully.
Jan 30 13:12:27.417102 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:12:27.418537 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:12:27.419892 systemd-logind[1474]: Removed session 17.
Jan 30 13:12:32.581802 systemd[1]: Started sshd@17-138.199.163.224:22-139.178.89.65:47452.service - OpenSSH per-connection server daemon (139.178.89.65:47452).
Jan 30 13:12:33.555242 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 47452 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:33.556955 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:33.562379 systemd-logind[1474]: New session 18 of user core.
Jan 30 13:12:33.567633 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:12:34.286816 sshd[4265]: Connection closed by 139.178.89.65 port 47452
Jan 30 13:12:34.287617 sshd-session[4263]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:34.291207 systemd[1]: sshd@17-138.199.163.224:22-139.178.89.65:47452.service: Deactivated successfully.
Jan 30 13:12:34.293892 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:12:34.296093 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:12:34.297994 systemd-logind[1474]: Removed session 18.
Jan 30 13:12:39.465304 systemd[1]: Started sshd@18-138.199.163.224:22-139.178.89.65:47464.service - OpenSSH per-connection server daemon (139.178.89.65:47464).
Jan 30 13:12:40.464404 sshd[4276]: Accepted publickey for core from 139.178.89.65 port 47464 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:40.466181 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:40.471546 systemd-logind[1474]: New session 19 of user core.
Jan 30 13:12:40.476648 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:12:41.204437 sshd[4278]: Connection closed by 139.178.89.65 port 47464
Jan 30 13:12:41.205126 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:41.209977 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:12:41.210694 systemd[1]: sshd@18-138.199.163.224:22-139.178.89.65:47464.service: Deactivated successfully.
Jan 30 13:12:41.213361 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:12:41.214482 systemd-logind[1474]: Removed session 19.
Jan 30 13:12:41.377822 systemd[1]: Started sshd@19-138.199.163.224:22-139.178.89.65:37156.service - OpenSSH per-connection server daemon (139.178.89.65:37156).
Jan 30 13:12:42.354518 sshd[4289]: Accepted publickey for core from 139.178.89.65 port 37156 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:42.355247 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:42.360825 systemd-logind[1474]: New session 20 of user core.
Jan 30 13:12:42.370716 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:12:44.192226 containerd[1498]: time="2025-01-30T13:12:44.190799877Z" level=info msg="StopContainer for \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\" with timeout 30 (s)"
Jan 30 13:12:44.197528 containerd[1498]: time="2025-01-30T13:12:44.197227071Z" level=info msg="Stop container \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\" with signal terminated"
Jan 30 13:12:44.208127 containerd[1498]: time="2025-01-30T13:12:44.208087396Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:12:44.211980 systemd[1]: cri-containerd-7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec.scope: Deactivated successfully.
Jan 30 13:12:44.218571 containerd[1498]: time="2025-01-30T13:12:44.218543671Z" level=info msg="StopContainer for \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\" with timeout 2 (s)"
Jan 30 13:12:44.219195 containerd[1498]: time="2025-01-30T13:12:44.219130246Z" level=info msg="Stop container \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\" with signal terminated"
Jan 30 13:12:44.226520 systemd-networkd[1393]: lxc_health: Link DOWN
Jan 30 13:12:44.226527 systemd-networkd[1393]: lxc_health: Lost carrier
Jan 30 13:12:44.250858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec-rootfs.mount: Deactivated successfully.
Jan 30 13:12:44.255176 systemd[1]: cri-containerd-b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f.scope: Deactivated successfully.
Jan 30 13:12:44.255652 systemd[1]: cri-containerd-b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f.scope: Consumed 6.883s CPU time.
Jan 30 13:12:44.263617 containerd[1498]: time="2025-01-30T13:12:44.263433434Z" level=info msg="shim disconnected" id=7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec namespace=k8s.io
Jan 30 13:12:44.263812 containerd[1498]: time="2025-01-30T13:12:44.263795231Z" level=warning msg="cleaning up after shim disconnected" id=7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec namespace=k8s.io
Jan 30 13:12:44.263891 containerd[1498]: time="2025-01-30T13:12:44.263878043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:44.277747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f-rootfs.mount: Deactivated successfully.
Jan 30 13:12:44.281770 containerd[1498]: time="2025-01-30T13:12:44.281725906Z" level=info msg="StopContainer for \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\" returns successfully"
Jan 30 13:12:44.283072 containerd[1498]: time="2025-01-30T13:12:44.283023881Z" level=info msg="shim disconnected" id=b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f namespace=k8s.io
Jan 30 13:12:44.283072 containerd[1498]: time="2025-01-30T13:12:44.283060493Z" level=warning msg="cleaning up after shim disconnected" id=b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f namespace=k8s.io
Jan 30 13:12:44.283072 containerd[1498]: time="2025-01-30T13:12:44.283068087Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:44.286480 containerd[1498]: time="2025-01-30T13:12:44.286208192Z" level=info msg="StopPodSandbox for \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\""
Jan 30 13:12:44.287564 containerd[1498]: time="2025-01-30T13:12:44.287523090Z" level=info msg="Container to stop \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:12:44.290145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570-shm.mount: Deactivated successfully.
Jan 30 13:12:44.298667 systemd[1]: cri-containerd-8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570.scope: Deactivated successfully.
Jan 30 13:12:44.304358 containerd[1498]: time="2025-01-30T13:12:44.304314779Z" level=info msg="StopContainer for \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\" returns successfully"
Jan 30 13:12:44.304930 containerd[1498]: time="2025-01-30T13:12:44.304786832Z" level=info msg="StopPodSandbox for \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\""
Jan 30 13:12:44.304930 containerd[1498]: time="2025-01-30T13:12:44.304811060Z" level=info msg="Container to stop \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:12:44.304930 containerd[1498]: time="2025-01-30T13:12:44.304838313Z" level=info msg="Container to stop \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:12:44.304930 containerd[1498]: time="2025-01-30T13:12:44.304846908Z" level=info msg="Container to stop \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:12:44.304930 containerd[1498]: time="2025-01-30T13:12:44.304855035Z" level=info msg="Container to stop \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:12:44.304930 containerd[1498]: time="2025-01-30T13:12:44.304862390Z" level=info msg="Container to stop \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:12:44.310700 systemd[1]: cri-containerd-4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123.scope: Deactivated successfully.
Jan 30 13:12:44.328919 containerd[1498]: time="2025-01-30T13:12:44.328861257Z" level=info msg="shim disconnected" id=8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570 namespace=k8s.io
Jan 30 13:12:44.329160 containerd[1498]: time="2025-01-30T13:12:44.328912637Z" level=warning msg="cleaning up after shim disconnected" id=8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570 namespace=k8s.io
Jan 30 13:12:44.329160 containerd[1498]: time="2025-01-30T13:12:44.328947927Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:44.343898 containerd[1498]: time="2025-01-30T13:12:44.343867119Z" level=info msg="TearDown network for sandbox \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" successfully"
Jan 30 13:12:44.344095 containerd[1498]: time="2025-01-30T13:12:44.344080094Z" level=info msg="StopPodSandbox for \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" returns successfully"
Jan 30 13:12:44.347466 containerd[1498]: time="2025-01-30T13:12:44.346774057Z" level=info msg="shim disconnected" id=4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123 namespace=k8s.io
Jan 30 13:12:44.347466 containerd[1498]: time="2025-01-30T13:12:44.347354300Z" level=warning msg="cleaning up after shim disconnected" id=4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123 namespace=k8s.io
Jan 30 13:12:44.347466 containerd[1498]: time="2025-01-30T13:12:44.347365361Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:44.368156 containerd[1498]: time="2025-01-30T13:12:44.368114913Z" level=info msg="TearDown network for sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" successfully"
Jan 30 13:12:44.368156 containerd[1498]: time="2025-01-30T13:12:44.368146254Z" level=info msg="StopPodSandbox for \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" returns successfully"
Jan 30 13:12:44.517942 kubelet[2747]: I0130 13:12:44.517532 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-etc-cni-netd\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.517942 kubelet[2747]: I0130 13:12:44.517597 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-config-path\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.517942 kubelet[2747]: I0130 13:12:44.517617 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-kernel\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.517942 kubelet[2747]: I0130 13:12:44.517633 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-hostproc\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.517942 kubelet[2747]: I0130 13:12:44.517650 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cni-path\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.517942 kubelet[2747]: I0130 13:12:44.517665 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-lib-modules\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518754 kubelet[2747]: I0130 13:12:44.517679 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-cgroup\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518754 kubelet[2747]: I0130 13:12:44.517709 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-run\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518754 kubelet[2747]: I0130 13:12:44.517726 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q55z\" (UniqueName: \"kubernetes.io/projected/44dc237f-6fbb-4c76-bdda-9a1b193343de-kube-api-access-5q55z\") pod \"44dc237f-6fbb-4c76-bdda-9a1b193343de\" (UID: \"44dc237f-6fbb-4c76-bdda-9a1b193343de\") "
Jan 30 13:12:44.518754 kubelet[2747]: I0130 13:12:44.517745 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-bpf-maps\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518754 kubelet[2747]: I0130 13:12:44.517760 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvph7\" (UniqueName: \"kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-kube-api-access-cvph7\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518754 kubelet[2747]: I0130 13:12:44.517777 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-hubble-tls\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518926 kubelet[2747]: I0130 13:12:44.517790 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-xtables-lock\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518926 kubelet[2747]: I0130 13:12:44.517805 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44dc237f-6fbb-4c76-bdda-9a1b193343de-cilium-config-path\") pod \"44dc237f-6fbb-4c76-bdda-9a1b193343de\" (UID: \"44dc237f-6fbb-4c76-bdda-9a1b193343de\") "
Jan 30 13:12:44.518926 kubelet[2747]: I0130 13:12:44.517821 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/765f7396-700d-42f9-a5b4-1c24ddc6850b-clustermesh-secrets\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.518926 kubelet[2747]: I0130 13:12:44.517837 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-net\") pod \"765f7396-700d-42f9-a5b4-1c24ddc6850b\" (UID: \"765f7396-700d-42f9-a5b4-1c24ddc6850b\") "
Jan 30 13:12:44.520899 kubelet[2747]: I0130 13:12:44.518776 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.522177 kubelet[2747]: I0130 13:12:44.521956 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.533357 kubelet[2747]: I0130 13:12:44.533250 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:12:44.533357 kubelet[2747]: I0130 13:12:44.533308 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.537866 kubelet[2747]: I0130 13:12:44.536803 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.537866 kubelet[2747]: I0130 13:12:44.536841 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-hostproc" (OuterVolumeSpecName: "hostproc") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.537866 kubelet[2747]: I0130 13:12:44.536859 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cni-path" (OuterVolumeSpecName: "cni-path") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.537866 kubelet[2747]: I0130 13:12:44.536876 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.537866 kubelet[2747]: I0130 13:12:44.536891 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.538055 kubelet[2747]: I0130 13:12:44.536904 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.540070 kubelet[2747]: I0130 13:12:44.539935 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 13:12:44.543376 kubelet[2747]: I0130 13:12:44.543345 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44dc237f-6fbb-4c76-bdda-9a1b193343de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44dc237f-6fbb-4c76-bdda-9a1b193343de" (UID: "44dc237f-6fbb-4c76-bdda-9a1b193343de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 13:12:44.543890 kubelet[2747]: I0130 13:12:44.543803 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44dc237f-6fbb-4c76-bdda-9a1b193343de-kube-api-access-5q55z" (OuterVolumeSpecName: "kube-api-access-5q55z") pod "44dc237f-6fbb-4c76-bdda-9a1b193343de" (UID: "44dc237f-6fbb-4c76-bdda-9a1b193343de"). InnerVolumeSpecName "kube-api-access-5q55z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:12:44.544033 kubelet[2747]: I0130 13:12:44.544004 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-kube-api-access-cvph7" (OuterVolumeSpecName: "kube-api-access-cvph7") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "kube-api-access-cvph7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:12:44.544079 kubelet[2747]: I0130 13:12:44.544031 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 13:12:44.544249 kubelet[2747]: I0130 13:12:44.544218 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765f7396-700d-42f9-a5b4-1c24ddc6850b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "765f7396-700d-42f9-a5b4-1c24ddc6850b" (UID: "765f7396-700d-42f9-a5b4-1c24ddc6850b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 13:12:44.606741 systemd[1]: Removed slice kubepods-burstable-pod765f7396_700d_42f9_a5b4_1c24ddc6850b.slice - libcontainer container kubepods-burstable-pod765f7396_700d_42f9_a5b4_1c24ddc6850b.slice.
Jan 30 13:12:44.606855 systemd[1]: kubepods-burstable-pod765f7396_700d_42f9_a5b4_1c24ddc6850b.slice: Consumed 6.975s CPU time.
Jan 30 13:12:44.609177 systemd[1]: Removed slice kubepods-besteffort-pod44dc237f_6fbb_4c76_bdda_9a1b193343de.slice - libcontainer container kubepods-besteffort-pod44dc237f_6fbb_4c76_bdda_9a1b193343de.slice.
Jan 30 13:12:44.621115 kubelet[2747]: I0130 13:12:44.621070 2747 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-bpf-maps\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621115 kubelet[2747]: I0130 13:12:44.621108 2747 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cvph7\" (UniqueName: \"kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-kube-api-access-cvph7\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621115 kubelet[2747]: I0130 13:12:44.621124 2747 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/765f7396-700d-42f9-a5b4-1c24ddc6850b-hubble-tls\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621135 2747 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-xtables-lock\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621144 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44dc237f-6fbb-4c76-bdda-9a1b193343de-cilium-config-path\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621153 2747 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/765f7396-700d-42f9-a5b4-1c24ddc6850b-clustermesh-secrets\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621163 2747 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-net\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621172 2747 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-etc-cni-netd\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621181 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-config-path\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621191 2747 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-host-proc-sys-kernel\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621318 kubelet[2747]: I0130 13:12:44.621203 2747 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-hostproc\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621575 kubelet[2747]: I0130 13:12:44.621212 2747 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cni-path\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621575 kubelet[2747]: I0130 13:12:44.621220 2747 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-lib-modules\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621575 kubelet[2747]: I0130 13:12:44.621232 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-cgroup\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621575 kubelet[2747]: I0130 13:12:44.621241 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/765f7396-700d-42f9-a5b4-1c24ddc6850b-cilium-run\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:44.621575 kubelet[2747]: I0130 13:12:44.621251 2747 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5q55z\" (UniqueName: \"kubernetes.io/projected/44dc237f-6fbb-4c76-bdda-9a1b193343de-kube-api-access-5q55z\") on node \"ci-4186-1-0-d-73846a73c0\" DevicePath \"\""
Jan 30 13:12:45.105452 kubelet[2747]: I0130 13:12:45.105054 2747 scope.go:117] "RemoveContainer" containerID="b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f"
Jan 30 13:12:45.112503 containerd[1498]: time="2025-01-30T13:12:45.112438331Z" level=info msg="RemoveContainer for \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\""
Jan 30 13:12:45.117403 containerd[1498]: time="2025-01-30T13:12:45.116900443Z" level=info msg="RemoveContainer for \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\" returns successfully"
Jan 30 13:12:45.117541 kubelet[2747]: I0130 13:12:45.117379 2747 scope.go:117] "RemoveContainer" containerID="b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770"
Jan 30 13:12:45.118992 containerd[1498]: time="2025-01-30T13:12:45.118959974Z" level=info msg="RemoveContainer for \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\""
Jan 30 13:12:45.121886 containerd[1498]: time="2025-01-30T13:12:45.121856520Z" level=info msg="RemoveContainer for \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\" returns successfully"
Jan 30 13:12:45.122114 kubelet[2747]: I0130 13:12:45.122009 2747 scope.go:117] "RemoveContainer" containerID="f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d"
Jan 30 13:12:45.123800 containerd[1498]: time="2025-01-30T13:12:45.123771059Z" level=info msg="RemoveContainer for \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\""
Jan 30 13:12:45.131294 containerd[1498]: time="2025-01-30T13:12:45.131265733Z" level=info msg="RemoveContainer for \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\" returns successfully"
Jan 30 13:12:45.131588 kubelet[2747]: I0130 13:12:45.131560 2747 scope.go:117] "RemoveContainer" containerID="0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f"
Jan 30 13:12:45.132401 containerd[1498]: time="2025-01-30T13:12:45.132350390Z" level=info msg="RemoveContainer for \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\""
Jan 30 13:12:45.134803 containerd[1498]: time="2025-01-30T13:12:45.134773502Z" level=info msg="RemoveContainer for \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\" returns successfully"
Jan 30 13:12:45.134938 kubelet[2747]: I0130 13:12:45.134911 2747 scope.go:117] "RemoveContainer" containerID="a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4"
Jan 30 13:12:45.135945 containerd[1498]: time="2025-01-30T13:12:45.135919620Z" level=info msg="RemoveContainer for \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\""
Jan 30 13:12:45.138897 containerd[1498]: time="2025-01-30T13:12:45.138868206Z" level=info msg="RemoveContainer for \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\" returns successfully"
Jan 30 13:12:45.139141 kubelet[2747]: I0130 13:12:45.139012 2747 scope.go:117] "RemoveContainer" containerID="b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f"
Jan 30 13:12:45.139384 containerd[1498]: time="2025-01-30T13:12:45.139322183Z" level=error msg="ContainerStatus for \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\": not found"
Jan 30 13:12:45.140366 kubelet[2747]: E0130 13:12:45.140308 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\": not found" containerID="b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f"
Jan 30 13:12:45.142511 kubelet[2747]: I0130 13:12:45.141978 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f"} err="failed to get container status \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b67e52ae726e24c1701730904e36c69cb3497b286f27d92e6a837347ed5caf9f\": not found"
Jan 30 13:12:45.142511 kubelet[2747]: I0130 13:12:45.142089 2747 scope.go:117] "RemoveContainer" containerID="b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770"
Jan 30 13:12:45.142511 kubelet[2747]: E0130 13:12:45.142354 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\": not found" containerID="b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770"
Jan 30 13:12:45.142511 kubelet[2747]: I0130 13:12:45.142372 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770"} err="failed to get container status \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\": not found"
Jan 30 13:12:45.142511 kubelet[2747]: I0130 13:12:45.142461 2747 scope.go:117] "RemoveContainer" containerID="f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d"
Jan 30 13:12:45.143558 containerd[1498]: time="2025-01-30T13:12:45.142252946Z" level=error msg="ContainerStatus for \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8aba3645038723c837581f624f05a93810695644fce497ef77e835e2e086770\": not found"
Jan 30 13:12:45.143558 containerd[1498]: time="2025-01-30T13:12:45.142722974Z" level=error msg="ContainerStatus for \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\": not found"
Jan 30 13:12:45.143558 containerd[1498]: time="2025-01-30T13:12:45.143074951Z" level=error msg="ContainerStatus for \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\": not found"
Jan 30 13:12:45.143558 containerd[1498]: time="2025-01-30T13:12:45.143468369Z" level=error msg="ContainerStatus for \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\": not found"
Jan 30 13:12:45.143649 kubelet[2747]: E0130 13:12:45.142815 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\": not found" containerID="f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d"
Jan 30 13:12:45.143649 kubelet[2747]: I0130 13:12:45.142831 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d"} err="failed to get container status \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f08ffd236c247be17d88d16a4ffc0eba9bc0c16cc23dea995659eff6f8260a4d\": not found"
Jan 30 13:12:45.143649 kubelet[2747]: I0130 13:12:45.142842 2747 scope.go:117] "RemoveContainer" containerID="0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f"
Jan 30 13:12:45.143649 kubelet[2747]: E0130 13:12:45.143201 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\": not found" containerID="0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f"
Jan 30 13:12:45.143649 kubelet[2747]: I0130 13:12:45.143256 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f"} err="failed to get container status \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0279af0f57dbd404e6ff5e400049190568b28189494990e6da848747a1e4f17f\": not found"
Jan 30 13:12:45.143649 kubelet[2747]: I0130 13:12:45.143267 2747 scope.go:117] "RemoveContainer" containerID="a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4"
Jan 30 13:12:45.143794 kubelet[2747]: E0130 13:12:45.143596 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\": not found" containerID="a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4"
Jan 30 13:12:45.143794 kubelet[2747]: I0130 13:12:45.143611 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4"} err="failed to get container status \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7182ea3fb540a99af63524074d3b25ac1649760bae1d0847fe9641559e9d3d4\": not found"
Jan 30 13:12:45.143794 kubelet[2747]: I0130 13:12:45.143687 2747 scope.go:117] "RemoveContainer" containerID="7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec"
Jan 30 13:12:45.144831 containerd[1498]: time="2025-01-30T13:12:45.144803125Z" level=info msg="RemoveContainer for \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\""
Jan 30 13:12:45.151505 containerd[1498]: time="2025-01-30T13:12:45.151220916Z" level=info msg="RemoveContainer for \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\" returns successfully"
Jan 30 13:12:45.152525 kubelet[2747]: I0130 13:12:45.152509 2747 scope.go:117] "RemoveContainer" containerID="7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec"
Jan 30 13:12:45.153378 containerd[1498]: time="2025-01-30T13:12:45.153318602Z" level=error msg="ContainerStatus for \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\": not found"
Jan 30 13:12:45.153597 kubelet[2747]: E0130 13:12:45.153558 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\": not found" containerID="7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec"
Jan 30 13:12:45.153641 kubelet[2747]: I0130 13:12:45.153589 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec"} err="failed to get container status \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b60c61b42e876fc732542716b0a301ffa5e7665bc0b4b25cf5b3c74fe525dec\": not found"
Jan 30 13:12:45.192176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570-rootfs.mount: Deactivated successfully.
Jan 30 13:12:45.192478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123-rootfs.mount: Deactivated successfully.
Jan 30 13:12:45.192670 systemd[1]: var-lib-kubelet-pods-44dc237f\x2d6fbb\x2d4c76\x2dbdda\x2d9a1b193343de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5q55z.mount: Deactivated successfully.
Jan 30 13:12:45.192780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123-shm.mount: Deactivated successfully.
Jan 30 13:12:45.192854 systemd[1]: var-lib-kubelet-pods-765f7396\x2d700d\x2d42f9\x2da5b4\x2d1c24ddc6850b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcvph7.mount: Deactivated successfully.
Jan 30 13:12:45.192951 systemd[1]: var-lib-kubelet-pods-765f7396\x2d700d\x2d42f9\x2da5b4\x2d1c24ddc6850b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 13:12:45.193034 systemd[1]: var-lib-kubelet-pods-765f7396\x2d700d\x2d42f9\x2da5b4\x2d1c24ddc6850b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 13:12:45.700372 kubelet[2747]: E0130 13:12:45.689842 2747 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:12:46.281137 sshd[4291]: Connection closed by 139.178.89.65 port 37156
Jan 30 13:12:46.281507 sshd-session[4289]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:46.285520 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:12:46.286430 systemd[1]: sshd@19-138.199.163.224:22-139.178.89.65:37156.service: Deactivated successfully.
Jan 30 13:12:46.288385 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:12:46.289614 systemd-logind[1474]: Removed session 20.
Jan 30 13:12:46.453988 systemd[1]: Started sshd@20-138.199.163.224:22-139.178.89.65:37158.service - OpenSSH per-connection server daemon (139.178.89.65:37158).
Jan 30 13:12:46.597984 kubelet[2747]: I0130 13:12:46.597263 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44dc237f-6fbb-4c76-bdda-9a1b193343de" path="/var/lib/kubelet/pods/44dc237f-6fbb-4c76-bdda-9a1b193343de/volumes"
Jan 30 13:12:46.598566 kubelet[2747]: I0130 13:12:46.598419 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" path="/var/lib/kubelet/pods/765f7396-700d-42f9-a5b4-1c24ddc6850b/volumes"
Jan 30 13:12:47.453652 sshd[4450]: Accepted publickey for core from 139.178.89.65 port 37158 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:47.455407 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:47.463677 systemd-logind[1474]: New session 21 of user core.
Jan 30 13:12:47.470738 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:12:48.394456 kubelet[2747]: I0130 13:12:48.392522 2747 topology_manager.go:215] "Topology Admit Handler" podUID="0a50295d-1de8-4dc3-a318-47ea2ba6ddf9" podNamespace="kube-system" podName="cilium-765kh"
Jan 30 13:12:48.394861 kubelet[2747]: E0130 13:12:48.394519 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44dc237f-6fbb-4c76-bdda-9a1b193343de" containerName="cilium-operator"
Jan 30 13:12:48.394861 kubelet[2747]: E0130 13:12:48.394530 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" containerName="clean-cilium-state"
Jan 30 13:12:48.394861 kubelet[2747]: E0130 13:12:48.394537 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" containerName="cilium-agent"
Jan 30 13:12:48.394861 kubelet[2747]: E0130 13:12:48.394543 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" containerName="mount-cgroup"
Jan 30 13:12:48.394861 kubelet[2747]: E0130 13:12:48.394548 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" containerName="apply-sysctl-overwrites"
Jan 30 13:12:48.394861 kubelet[2747]: E0130 13:12:48.394571 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" containerName="mount-bpf-fs"
Jan 30 13:12:48.397361 kubelet[2747]: I0130 13:12:48.394592 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="765f7396-700d-42f9-a5b4-1c24ddc6850b" containerName="cilium-agent"
Jan 30 13:12:48.397361 kubelet[2747]: I0130 13:12:48.397361 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="44dc237f-6fbb-4c76-bdda-9a1b193343de" containerName="cilium-operator"
Jan 30 13:12:48.417145 systemd[1]: Created slice kubepods-burstable-pod0a50295d_1de8_4dc3_a318_47ea2ba6ddf9.slice - libcontainer container kubepods-burstable-pod0a50295d_1de8_4dc3_a318_47ea2ba6ddf9.slice.
Jan 30 13:12:48.441693 kubelet[2747]: I0130 13:12:48.441655 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-host-proc-sys-net\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441834 kubelet[2747]: I0130 13:12:48.441747 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-etc-cni-netd\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441834 kubelet[2747]: I0130 13:12:48.441766 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-host-proc-sys-kernel\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441834 kubelet[2747]: I0130 13:12:48.441819 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-bpf-maps\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441834 kubelet[2747]: I0130 13:12:48.441835 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-hubble-tls\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441923 kubelet[2747]: I0130 13:12:48.441847 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-cni-path\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441923 kubelet[2747]: I0130 13:12:48.441862 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-xtables-lock\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441923 kubelet[2747]: I0130 13:12:48.441909 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-clustermesh-secrets\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441979 kubelet[2747]: I0130 13:12:48.441922 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-cilium-ipsec-secrets\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.441979 kubelet[2747]: I0130 13:12:48.441935 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-cilium-run\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.442022 kubelet[2747]: I0130 13:12:48.441948 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-hostproc\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.442022 kubelet[2747]: I0130 13:12:48.442005 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98q7n\" (UniqueName: \"kubernetes.io/projected/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-kube-api-access-98q7n\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.442022 kubelet[2747]: I0130 13:12:48.442018 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-cilium-cgroup\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.442075 kubelet[2747]: I0130 13:12:48.442029 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-lib-modules\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.442075 kubelet[2747]: I0130 13:12:48.442056 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a50295d-1de8-4dc3-a318-47ea2ba6ddf9-cilium-config-path\") pod \"cilium-765kh\" (UID: \"0a50295d-1de8-4dc3-a318-47ea2ba6ddf9\") " pod="kube-system/cilium-765kh"
Jan 30 13:12:48.626229 sshd[4455]: Connection closed by 139.178.89.65 port 37158
Jan 30 13:12:48.626925 sshd-session[4450]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:48.630436 systemd[1]: sshd@20-138.199.163.224:22-139.178.89.65:37158.service: Deactivated successfully.
Jan 30 13:12:48.633203 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:12:48.635797 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:12:48.637426 systemd-logind[1474]: Removed session 21.
Jan 30 13:12:48.722443 containerd[1498]: time="2025-01-30T13:12:48.721878646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-765kh,Uid:0a50295d-1de8-4dc3-a318-47ea2ba6ddf9,Namespace:kube-system,Attempt:0,}"
Jan 30 13:12:48.745685 containerd[1498]: time="2025-01-30T13:12:48.745539714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:12:48.745685 containerd[1498]: time="2025-01-30T13:12:48.745597026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:12:48.745685 containerd[1498]: time="2025-01-30T13:12:48.745612196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:48.745944 containerd[1498]: time="2025-01-30T13:12:48.745688434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:48.765706 systemd[1]: Started cri-containerd-4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527.scope - libcontainer container 4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527.
Jan 30 13:12:48.796737 containerd[1498]: time="2025-01-30T13:12:48.796675989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-765kh,Uid:0a50295d-1de8-4dc3-a318-47ea2ba6ddf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\""
Jan 30 13:12:48.806129 containerd[1498]: time="2025-01-30T13:12:48.805966288Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:12:48.807561 systemd[1]: Started sshd@21-138.199.163.224:22-139.178.89.65:37162.service - OpenSSH per-connection server daemon (139.178.89.65:37162).
Jan 30 13:12:48.815865 containerd[1498]: time="2025-01-30T13:12:48.815704631Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671\""
Jan 30 13:12:48.817146 containerd[1498]: time="2025-01-30T13:12:48.816372474Z" level=info msg="StartContainer for \"8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671\""
Jan 30 13:12:48.845690 systemd[1]: Started cri-containerd-8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671.scope - libcontainer container 8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671.
Jan 30 13:12:48.870766 containerd[1498]: time="2025-01-30T13:12:48.870696215Z" level=info msg="StartContainer for \"8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671\" returns successfully"
Jan 30 13:12:48.881711 systemd[1]: cri-containerd-8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671.scope: Deactivated successfully.
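The VerifyControllerAttachedVolume entries above enumerate the host-path, secret, configmap, and projected volumes declared in the cilium-765kh pod spec. The same information is visible from the API; a minimal client-go sketch that lists the host-path subset (the kubeconfig path is an assumption; this illustrates the API, not kubelet's reconciler):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the cluster at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-765kh", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, v := range pod.Spec.Volumes {
            if v.HostPath != nil {
                // e.g. bpf-maps, cni-path, lib-modules and their host paths
                fmt.Printf("%s -> %s\n", v.Name, v.HostPath.Path)
            }
        }
    }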
Jan 30 13:12:48.917150 containerd[1498]: time="2025-01-30T13:12:48.917086223Z" level=info msg="shim disconnected" id=8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671 namespace=k8s.io
Jan 30 13:12:48.917150 containerd[1498]: time="2025-01-30T13:12:48.917135940Z" level=warning msg="cleaning up after shim disconnected" id=8321684688c2e168438a724521b80911e0693807735d9e92052467d9727f6671 namespace=k8s.io
Jan 30 13:12:48.917150 containerd[1498]: time="2025-01-30T13:12:48.917143475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:49.124637 containerd[1498]: time="2025-01-30T13:12:49.123007198Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:12:49.134218 containerd[1498]: time="2025-01-30T13:12:49.134162958Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867\""
Jan 30 13:12:49.135856 containerd[1498]: time="2025-01-30T13:12:49.134682110Z" level=info msg="StartContainer for \"6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867\""
Jan 30 13:12:49.164622 systemd[1]: Started cri-containerd-6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867.scope - libcontainer container 6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867.
Jan 30 13:12:49.189957 containerd[1498]: time="2025-01-30T13:12:49.189908783Z" level=info msg="StartContainer for \"6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867\" returns successfully"
Jan 30 13:12:49.198637 systemd[1]: cri-containerd-6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867.scope: Deactivated successfully.
Jan 30 13:12:49.220152 containerd[1498]: time="2025-01-30T13:12:49.220064867Z" level=info msg="shim disconnected" id=6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867 namespace=k8s.io
Jan 30 13:12:49.220152 containerd[1498]: time="2025-01-30T13:12:49.220147599Z" level=warning msg="cleaning up after shim disconnected" id=6550dbea0aeda809ad282cbe622e39ccebbe0877179421713c51fc69658bb867 namespace=k8s.io
Jan 30 13:12:49.220152 containerd[1498]: time="2025-01-30T13:12:49.220156956Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:49.793290 sshd[4508]: Accepted publickey for core from 139.178.89.65 port 37162 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:49.794958 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:49.800232 systemd-logind[1474]: New session 22 of user core.
Jan 30 13:12:49.804611 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:12:50.130800 containerd[1498]: time="2025-01-30T13:12:50.130480095Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:12:50.167464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228083295.mount: Deactivated successfully.
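Each cilium init container above runs to completion: the task is created (a shim starts), started, exits (its transient scope deactivates), and is deleted ("shim disconnected"). A sketch of that cycle with the containerd Go client; kubelet actually drives this over CRI, so this is illustrative only, and the context is assumed to carry the k8s.io namespace:

    package tasks

    import (
        "context"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
    )

    // runToCompletion mirrors the lifecycle in the log entries above.
    func runToCompletion(ctx context.Context, c containerd.Container) (uint32, error) {
        // Creating the task is what spawns the per-container shim.
        task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            return 0, err
        }
        // Deleting the task tears the shim down ("shim disconnected").
        defer task.Delete(ctx)

        // Subscribe to the exit before Start to avoid missing a fast exit.
        exitCh, err := task.Wait(ctx)
        if err != nil {
            return 0, err
        }
        // Corresponds to "StartContainer ... returns successfully".
        if err := task.Start(ctx); err != nil {
            return 0, err
        }
        status := <-exitCh
        return status.ExitCode(), nil
    }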
Jan 30 13:12:50.169395 containerd[1498]: time="2025-01-30T13:12:50.169279532Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1\""
Jan 30 13:12:50.170434 containerd[1498]: time="2025-01-30T13:12:50.169879231Z" level=info msg="StartContainer for \"4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1\""
Jan 30 13:12:50.209645 systemd[1]: Started cri-containerd-4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1.scope - libcontainer container 4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1.
Jan 30 13:12:50.237326 containerd[1498]: time="2025-01-30T13:12:50.237254262Z" level=info msg="StartContainer for \"4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1\" returns successfully"
Jan 30 13:12:50.244227 systemd[1]: cri-containerd-4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1.scope: Deactivated successfully.
Jan 30 13:12:50.265622 containerd[1498]: time="2025-01-30T13:12:50.265557454Z" level=info msg="shim disconnected" id=4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1 namespace=k8s.io
Jan 30 13:12:50.265622 containerd[1498]: time="2025-01-30T13:12:50.265607903Z" level=warning msg="cleaning up after shim disconnected" id=4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1 namespace=k8s.io
Jan 30 13:12:50.265622 containerd[1498]: time="2025-01-30T13:12:50.265616079Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:50.479822 sshd[4634]: Connection closed by 139.178.89.65 port 37162
Jan 30 13:12:50.480955 sshd-session[4508]: pam_unix(sshd:session): session closed for user core
Jan 30 13:12:50.486419 systemd[1]: sshd@21-138.199.163.224:22-139.178.89.65:37162.service: Deactivated successfully.
Jan 30 13:12:50.491298 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:12:50.494461 systemd-logind[1474]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:12:50.497274 systemd-logind[1474]: Removed session 22.
Jan 30 13:12:50.553173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a5bf58e6646e4387119468220aad1d2ab9f778486325d99a7282913d58f5ed1-rootfs.mount: Deactivated successfully.
Jan 30 13:12:50.654139 systemd[1]: Started sshd@22-138.199.163.224:22-139.178.89.65:37168.service - OpenSSH per-connection server daemon (139.178.89.65:37168).
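The mount-bpf-fs step that just completed conventionally does one thing: mount the BPF filesystem at /sys/fs/bpf if it is not already mounted. A hedged Go equivalent of that shell one-liner (sketch under the assumption that cilium's init container performs the usual bpffs mount; EBUSY handling is an assumption about an already-mounted filesystem):

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Equivalent of: mount -t bpf bpffs /sys/fs/bpf
        // EBUSY typically means bpffs is already mounted there, which is fine.
        err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
        if err != nil && err != unix.EBUSY {
            log.Fatalf("mounting bpffs: %v", err)
        }
    }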
Jan 30 13:12:50.701968 kubelet[2747]: E0130 13:12:50.701902 2747 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:12:51.139607 containerd[1498]: time="2025-01-30T13:12:51.139107139Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:12:51.161347 containerd[1498]: time="2025-01-30T13:12:51.161306900Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f\""
Jan 30 13:12:51.163526 containerd[1498]: time="2025-01-30T13:12:51.162925543Z" level=info msg="StartContainer for \"8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f\""
Jan 30 13:12:51.193609 systemd[1]: Started cri-containerd-8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f.scope - libcontainer container 8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f.
Jan 30 13:12:51.215589 systemd[1]: cri-containerd-8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f.scope: Deactivated successfully.
Jan 30 13:12:51.218314 containerd[1498]: time="2025-01-30T13:12:51.218246903Z" level=info msg="StartContainer for \"8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f\" returns successfully"
Jan 30 13:12:51.240103 containerd[1498]: time="2025-01-30T13:12:51.240043740Z" level=info msg="shim disconnected" id=8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f namespace=k8s.io
Jan 30 13:12:51.240103 containerd[1498]: time="2025-01-30T13:12:51.240092095Z" level=warning msg="cleaning up after shim disconnected" id=8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f namespace=k8s.io
Jan 30 13:12:51.240103 containerd[1498]: time="2025-01-30T13:12:51.240100240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:51.552410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f38fc3bc09b11e82687c608ff3a46ab4a772763f678960b84c9d81c90ebb16f-rootfs.mount: Deactivated successfully.
Jan 30 13:12:51.638336 sshd[4698]: Accepted publickey for core from 139.178.89.65 port 37168 ssh2: RSA SHA256:5b7aLHOxh/fZTvNGxGsjXyEVE8Wd5gb2YihhQWnHlKs
Jan 30 13:12:51.640421 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:12:51.646296 systemd-logind[1474]: New session 23 of user core.
Jan 30 13:12:51.651642 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:12:52.140507 containerd[1498]: time="2025-01-30T13:12:52.140449305Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:12:52.163547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331491318.mount: Deactivated successfully.
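The kubelet.go:2900 "Container runtime network not ready" entry is what flips the node's Ready condition to False, as the setters.go:580 entry confirms a few seconds later. That condition is readable from the API; a client-go sketch (assumes an already-constructed clientset; illustration only):

    package nodes

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printReady reports the node's Ready condition. While the CNI plugin is
    // uninitialized this prints Ready=False reason=KubeletNotReady, matching
    // the log above.
    func printReady(ctx context.Context, cs *kubernetes.Clientset, node string) error {
        n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
            }
        }
        return nil
    }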
Jan 30 13:12:52.165185 containerd[1498]: time="2025-01-30T13:12:52.163915758Z" level=info msg="CreateContainer within sandbox \"4561b50f5b4d54106f5ed616ce1d14ae67f0ed60121fdb5a4584a5f7e160a527\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7ae4d270757e08f421d62ea72346264fe3867acf40b3c3439bf1522c30b13099\""
Jan 30 13:12:52.165429 containerd[1498]: time="2025-01-30T13:12:52.165400971Z" level=info msg="StartContainer for \"7ae4d270757e08f421d62ea72346264fe3867acf40b3c3439bf1522c30b13099\""
Jan 30 13:12:52.209544 systemd[1]: Started cri-containerd-7ae4d270757e08f421d62ea72346264fe3867acf40b3c3439bf1522c30b13099.scope - libcontainer container 7ae4d270757e08f421d62ea72346264fe3867acf40b3c3439bf1522c30b13099.
Jan 30 13:12:52.251513 containerd[1498]: time="2025-01-30T13:12:52.251389581Z" level=info msg="StartContainer for \"7ae4d270757e08f421d62ea72346264fe3867acf40b3c3439bf1522c30b13099\" returns successfully"
Jan 30 13:12:52.781678 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 13:12:55.098322 kubelet[2747]: I0130 13:12:55.098266 2747 setters.go:580] "Node became not ready" node="ci-4186-1-0-d-73846a73c0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:12:55Z","lastTransitionTime":"2025-01-30T13:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 13:12:55.731649 systemd-networkd[1393]: lxc_health: Link UP
Jan 30 13:12:55.732435 systemd-networkd[1393]: lxc_health: Gained carrier
Jan 30 13:12:56.662797 systemd[1]: run-containerd-runc-k8s.io-7ae4d270757e08f421d62ea72346264fe3867acf40b3c3439bf1522c30b13099-runc.tE8PR5.mount: Deactivated successfully.
Jan 30 13:12:56.744540 kubelet[2747]: I0130 13:12:56.743039 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-765kh" podStartSLOduration=8.743021654 podStartE2EDuration="8.743021654s" podCreationTimestamp="2025-01-30 13:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:12:53.169777205 +0000 UTC m=+222.673845995" watchObservedRunningTime="2025-01-30 13:12:56.743021654 +0000 UTC m=+226.247090444"
Jan 30 13:12:56.878675 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Jan 30 13:13:01.102020 systemd[1]: run-containerd-runc-k8s.io-7ae4d270757e08f421d62ea72346264fe3867acf40b3c3439bf1522c30b13099-runc.gv9Vh2.mount: Deactivated successfully.
Jan 30 13:13:03.526460 sshd[4755]: Connection closed by 139.178.89.65 port 37168
Jan 30 13:13:03.527649 sshd-session[4698]: pam_unix(sshd:session): session closed for user core
Jan 30 13:13:03.532650 systemd[1]: sshd@22-138.199.163.224:22-139.178.89.65:37168.service: Deactivated successfully.
Jan 30 13:13:03.536280 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:13:03.537844 systemd-logind[1474]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:13:03.539816 systemd-logind[1474]: Removed session 23.
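The lxc_health interface gaining carrier above is the cilium-agent's own health-check veth coming up, a quick signal that the agent started. Its state can be checked from Go with the vishvananda/netlink package (a sketch, assuming that package's current API; the function name is illustrative):

    package health

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    // checkHealthLink reports the lxc_health link; absence usually means
    // cilium-agent is not (yet) running on this node.
    func checkHealthLink() error {
        link, err := netlink.LinkByName("lxc_health")
        if err != nil {
            return err
        }
        attrs := link.Attrs()
        fmt.Printf("lxc_health: index=%d state=%s\n", attrs.Index, attrs.OperState)
        return nil
    }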
Jan 30 13:13:10.589772 containerd[1498]: time="2025-01-30T13:13:10.589652974Z" level=info msg="StopPodSandbox for \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\""
Jan 30 13:13:10.590869 containerd[1498]: time="2025-01-30T13:13:10.589858982Z" level=info msg="TearDown network for sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" successfully"
Jan 30 13:13:10.590869 containerd[1498]: time="2025-01-30T13:13:10.589925040Z" level=info msg="StopPodSandbox for \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" returns successfully"
Jan 30 13:13:10.590869 containerd[1498]: time="2025-01-30T13:13:10.590616590Z" level=info msg="RemovePodSandbox for \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\""
Jan 30 13:13:10.590869 containerd[1498]: time="2025-01-30T13:13:10.590670684Z" level=info msg="Forcibly stopping sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\""
Jan 30 13:13:10.590869 containerd[1498]: time="2025-01-30T13:13:10.590779756Z" level=info msg="TearDown network for sandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" successfully"
Jan 30 13:13:10.599913 containerd[1498]: time="2025-01-30T13:13:10.599097527Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 13:13:10.599913 containerd[1498]: time="2025-01-30T13:13:10.599210447Z" level=info msg="RemovePodSandbox \"4b6bce24615ffe963a14f15c01a4dfc5f81b4005be6b9f9264fb179063444123\" returns successfully"
Jan 30 13:13:10.600431 containerd[1498]: time="2025-01-30T13:13:10.600172631Z" level=info msg="StopPodSandbox for \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\""
Jan 30 13:13:10.600431 containerd[1498]: time="2025-01-30T13:13:10.600308282Z" level=info msg="TearDown network for sandbox \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" successfully"
Jan 30 13:13:10.600431 containerd[1498]: time="2025-01-30T13:13:10.600329414Z" level=info msg="StopPodSandbox for \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" returns successfully"
Jan 30 13:13:10.602538 containerd[1498]: time="2025-01-30T13:13:10.601123381Z" level=info msg="RemovePodSandbox for \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\""
Jan 30 13:13:10.602538 containerd[1498]: time="2025-01-30T13:13:10.601166316Z" level=info msg="Forcibly stopping sandbox \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\""
Jan 30 13:13:10.602538 containerd[1498]: time="2025-01-30T13:13:10.601254286Z" level=info msg="TearDown network for sandbox \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" successfully"
Jan 30 13:13:10.608920 containerd[1498]: time="2025-01-30T13:13:10.608851331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
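The StopPodSandbox/RemovePodSandbox pairs above are kubelet's periodic sandbox garbage collection talking to containerd over CRI. The same two calls can be issued directly against the CRI socket; a hedged sketch with the official cri-api bindings (gRPC client construction against /run/containerd/containerd.sock is omitted for brevity):

    package sandboxgc

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeSandbox mirrors the StopPodSandbox + RemovePodSandbox sequence
    // in the log for a given sandbox id.
    func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            return err
        }
        _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
        return err
    }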
Jan 30 13:13:10.608920 containerd[1498]: time="2025-01-30T13:13:10.608914684Z" level=info msg="RemovePodSandbox \"8ead30684677970f9efa346a5e91c69e8941cf1fb53f618a1f98ce32680fa570\" returns successfully"
Jan 30 13:13:19.691464 systemd[1]: cri-containerd-05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a.scope: Deactivated successfully.
Jan 30 13:13:19.691793 systemd[1]: cri-containerd-05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a.scope: Consumed 3.659s CPU time, 21.4M memory peak, 0B memory swap peak.
Jan 30 13:13:19.713086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a-rootfs.mount: Deactivated successfully.
Jan 30 13:13:19.727641 containerd[1498]: time="2025-01-30T13:13:19.727579969Z" level=info msg="shim disconnected" id=05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a namespace=k8s.io
Jan 30 13:13:19.728872 containerd[1498]: time="2025-01-30T13:13:19.727952368Z" level=warning msg="cleaning up after shim disconnected" id=05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a namespace=k8s.io
Jan 30 13:13:19.728872 containerd[1498]: time="2025-01-30T13:13:19.727968800Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:13:20.115369 kubelet[2747]: E0130 13:13:20.115262 2747 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56842->10.0.0.2:2379: read: connection timed out"
Jan 30 13:13:20.210021 kubelet[2747]: I0130 13:13:20.209985 2747 scope.go:117] "RemoveContainer" containerID="05964ce6f1a4995773fe09d267c8d3c5b271b883d2036c6549db92b146ada98a"
Jan 30 13:13:20.213907 containerd[1498]: time="2025-01-30T13:13:20.213854184Z" level=info msg="CreateContainer within sandbox \"49dc1eac08664a5976eee1807e4cf97f64dd0c9808886da0053b4250b774f1fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 13:13:20.239304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157143112.mount: Deactivated successfully.
Jan 30 13:13:20.240329 containerd[1498]: time="2025-01-30T13:13:20.240230342Z" level=info msg="CreateContainer within sandbox \"49dc1eac08664a5976eee1807e4cf97f64dd0c9808886da0053b4250b774f1fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e4634038fd074f627c703e6a6bf2a59a25653ef84b827fab5271b2dbf25df435\""
Jan 30 13:13:20.240901 containerd[1498]: time="2025-01-30T13:13:20.240870639Z" level=info msg="StartContainer for \"e4634038fd074f627c703e6a6bf2a59a25653ef84b827fab5271b2dbf25df435\""
Jan 30 13:13:20.280648 systemd[1]: Started cri-containerd-e4634038fd074f627c703e6a6bf2a59a25653ef84b827fab5271b2dbf25df435.scope - libcontainer container e4634038fd074f627c703e6a6bf2a59a25653ef84b827fab5271b2dbf25df435.
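The scope.go:117 "RemoveContainer" entry is kubelet discarding the dead kube-controller-manager container before recreating it at Attempt:1. From the API side the restart shows up as a bumped restartCount; a client-go sketch (the static-pod name "kube-controller-manager-<node>" is the usual convention and an assumption here):

    package restarts

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printRestarts shows the restart count after the recreation above.
    func printRestarts(ctx context.Context, cs *kubernetes.Clientset) error {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
            "kube-controller-manager-ci-4186-1-0-d-73846a73c0", metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, st := range pod.Status.ContainerStatuses {
            fmt.Printf("%s restarts=%d\n", st.Name, st.RestartCount)
        }
        return nil
    }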
Jan 30 13:13:20.321696 containerd[1498]: time="2025-01-30T13:13:20.321651393Z" level=info msg="StartContainer for \"e4634038fd074f627c703e6a6bf2a59a25653ef84b827fab5271b2dbf25df435\" returns successfully"
Jan 30 13:13:24.213071 kubelet[2747]: E0130 13:13:24.208798 2747 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56622->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4186-1-0-d-73846a73c0.181f7a9c7fbdf0cb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4186-1-0-d-73846a73c0,UID:645698ea5f46b33dbe7628590336a7b0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-d-73846a73c0,},FirstTimestamp:2025-01-30 13:13:13.764942027 +0000 UTC m=+243.269010827,LastTimestamp:2025-01-30 13:13:13.764942027 +0000 UTC m=+243.269010827,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-d-73846a73c0,}"
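The controller.go:195 "Failed to update lease" entry is kubelet failing to renew its node Lease because etcd reads are timing out; the rejected kube-apiserver Unhealthy event at the end fails for the same reason. The lease in question lives in the kube-node-lease namespace and can be inspected with client-go (sketch; node name taken from this log, clientset construction assumed):

    package leases

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printLease reads the node Lease kubelet failed to renew above. A
    // RenewTime stale past the lease duration is what eventually marks the
    // node unreachable.
    func printLease(ctx context.Context, cs *kubernetes.Clientset) error {
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx,
            "ci-4186-1-0-d-73846a73c0", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if lease.Spec.HolderIdentity != nil && lease.Spec.RenewTime != nil {
            fmt.Printf("holder=%s renewed=%v\n", *lease.Spec.HolderIdentity, lease.Spec.RenewTime.Time)
        }
        return nil
    }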