Dec 16 09:36:52.982787 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 16 09:36:52.982819 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:36:52.982831 kernel: BIOS-provided physical RAM map:
Dec 16 09:36:52.982840 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 09:36:52.982848 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 09:36:52.982856 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 09:36:52.982865 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 16 09:36:52.982874 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 16 09:36:52.982886 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 09:36:52.982894 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 09:36:52.982902 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 09:36:52.982910 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 09:36:52.982918 kernel: NX (Execute Disable) protection: active
Dec 16 09:36:52.982927 kernel: APIC: Static calls initialized
Dec 16 09:36:52.982941 kernel: SMBIOS 2.8 present.
Dec 16 09:36:52.982950 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 16 09:36:52.982959 kernel: Hypervisor detected: KVM
Dec 16 09:36:52.982968 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 09:36:52.982977 kernel: kvm-clock: using sched offset of 3238714764 cycles
Dec 16 09:36:52.982987 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 09:36:52.982995 kernel: tsc: Detected 2445.404 MHz processor
Dec 16 09:36:52.983004 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 09:36:52.983014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 09:36:52.983026 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 16 09:36:52.983035 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 09:36:52.983044 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 09:36:52.983053 kernel: Using GB pages for direct mapping
Dec 16 09:36:52.983062 kernel: ACPI: Early table checksum verification disabled
Dec 16 09:36:52.983071 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 16 09:36:52.983080 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:52.983089 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:52.983098 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:52.983111 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 16 09:36:52.983120 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:52.984437 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:52.984451 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:52.984461 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 09:36:52.984470 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 16 09:36:52.984479 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 16 09:36:52.984489 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 16 09:36:52.984508 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 16 09:36:52.984518 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 16 09:36:52.984528 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 16 09:36:52.984538 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 16 09:36:52.984547 kernel: No NUMA configuration found
Dec 16 09:36:52.984557 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 16 09:36:52.984570 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 16 09:36:52.984579 kernel: Zone ranges:
Dec 16 09:36:52.984589 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 09:36:52.984598 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 16 09:36:52.984608 kernel: Normal empty
Dec 16 09:36:52.984617 kernel: Movable zone start for each node
Dec 16 09:36:52.984627 kernel: Early memory node ranges
Dec 16 09:36:52.984637 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 09:36:52.984646 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 16 09:36:52.984659 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 16 09:36:52.984669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 09:36:52.984678 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 09:36:52.984688 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 16 09:36:52.984697 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 09:36:52.984707 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 09:36:52.984716 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 09:36:52.984725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 09:36:52.984735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 09:36:52.984745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 09:36:52.984758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 09:36:52.984768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 09:36:52.984777 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 09:36:52.984788 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 09:36:52.984798 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 16 09:36:52.984808 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 09:36:52.984817 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 09:36:52.984827 kernel: Booting paravirtualized kernel on KVM
Dec 16 09:36:52.984837 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 09:36:52.984850 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 09:36:52.984860 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 16 09:36:52.984870 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 16 09:36:52.984879 kernel: pcpu-alloc: [0] 0 1
Dec 16 09:36:52.984889 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 16 09:36:52.984900 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 16 09:36:52.984910 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 16 09:36:52.984919 kernel: random: crng init done
Dec 16 09:36:52.984954 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 09:36:52.984964 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 09:36:52.984973 kernel: Fallback order for Node 0: 0
Dec 16 09:36:52.984984 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 16 09:36:52.984994 kernel: Policy zone: DMA32
Dec 16 09:36:52.985003 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 09:36:52.985014 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 16 09:36:52.985024 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 09:36:52.985034 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 16 09:36:52.985048 kernel: ftrace: allocated 149 pages with 4 groups
Dec 16 09:36:52.985057 kernel: Dynamic Preempt: voluntary
Dec 16 09:36:52.985067 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 09:36:52.985077 kernel: rcu: RCU event tracing is enabled.
Dec 16 09:36:52.985087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 09:36:52.985097 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 09:36:52.985107 kernel: Rude variant of Tasks RCU enabled.
Dec 16 09:36:52.985116 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 09:36:52.986156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 09:36:52.986196 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 09:36:52.986211 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 09:36:52.986223 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 09:36:52.986233 kernel: Console: colour VGA+ 80x25
Dec 16 09:36:52.986243 kernel: printk: console [tty0] enabled
Dec 16 09:36:52.986253 kernel: printk: console [ttyS0] enabled
Dec 16 09:36:52.986263 kernel: ACPI: Core revision 20230628
Dec 16 09:36:52.986273 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 09:36:52.986283 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 09:36:52.986297 kernel: x2apic enabled
Dec 16 09:36:52.986308 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 09:36:52.986318 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 09:36:52.986328 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 16 09:36:52.986338 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Dec 16 09:36:52.986348 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 09:36:52.986359 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 09:36:52.986369 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 09:36:52.986380 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 09:36:52.986437 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 09:36:52.986449 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 16 09:36:52.986459 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 16 09:36:52.986473 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 09:36:52.986484 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 09:36:52.986494 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 09:36:52.986505 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 09:36:52.986515 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 09:36:52.986527 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 09:36:52.986537 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 09:36:52.986549 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 09:36:52.986563 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 09:36:52.986574 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 09:36:52.986584 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 09:36:52.986595 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 09:36:52.986606 kernel: Freeing SMP alternatives memory: 32K
Dec 16 09:36:52.986619 kernel: pid_max: default: 32768 minimum: 301
Dec 16 09:36:52.986629 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 16 09:36:52.986639 kernel: landlock: Up and running.
Dec 16 09:36:52.986648 kernel: SELinux: Initializing.
Dec 16 09:36:52.986659 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 09:36:52.986668 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 09:36:52.986678 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 09:36:52.986689 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:36:52.986700 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:36:52.986714 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 09:36:52.986723 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 09:36:52.986733 kernel: ... version:                0
Dec 16 09:36:52.986743 kernel: ... bit width:              48
Dec 16 09:36:52.986753 kernel: ... generic registers:      6
Dec 16 09:36:52.986763 kernel: ... value mask:             0000ffffffffffff
Dec 16 09:36:52.986773 kernel: ... max period:             00007fffffffffff
Dec 16 09:36:52.986782 kernel: ... fixed-purpose events:   0
Dec 16 09:36:52.986793 kernel: ... event mask:             000000000000003f
Dec 16 09:36:52.986807 kernel: signal: max sigframe size: 1776
Dec 16 09:36:52.986817 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 09:36:52.986828 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 09:36:52.986838 kernel: smp: Bringing up secondary CPUs ...
Dec 16 09:36:52.986848 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 09:36:52.986857 kernel: .... node #0, CPUs: #1
Dec 16 09:36:52.986867 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 09:36:52.986877 kernel: smpboot: Max logical packages: 1
Dec 16 09:36:52.986887 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Dec 16 09:36:52.986901 kernel: devtmpfs: initialized
Dec 16 09:36:52.986911 kernel: x86/mm: Memory block size: 128MB
Dec 16 09:36:52.986921 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 09:36:52.986932 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 09:36:52.986941 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 09:36:52.986952 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 09:36:52.986962 kernel: audit: initializing netlink subsys (disabled)
Dec 16 09:36:52.986972 kernel: audit: type=2000 audit(1734341812.150:1): state=initialized audit_enabled=0 res=1
Dec 16 09:36:52.986983 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 09:36:52.986997 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 09:36:52.987006 kernel: cpuidle: using governor menu
Dec 16 09:36:52.987017 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 09:36:52.987026 kernel: dca service started, version 1.12.1
Dec 16 09:36:52.987037 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 16 09:36:52.987046 kernel: PCI: Using configuration type 1 for base access
Dec 16 09:36:52.987056 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 09:36:52.987066 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 09:36:52.987077 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 09:36:52.987091 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 09:36:52.987101 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 09:36:52.987111 kernel: ACPI: Added _OSI(Module Device)
Dec 16 09:36:52.987121 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 09:36:52.988232 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 16 09:36:52.988247 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 09:36:52.988257 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 09:36:52.988267 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 16 09:36:52.988277 kernel: ACPI: Interpreter enabled
Dec 16 09:36:52.988293 kernel: ACPI: PM: (supports S0 S5)
Dec 16 09:36:52.988303 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 09:36:52.988313 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 09:36:52.988323 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 09:36:52.988333 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 09:36:52.988344 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 09:36:52.988615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 09:36:52.988774 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 09:36:52.988950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 09:36:52.988967 kernel: PCI host bridge to bus 0000:00
Dec 16 09:36:52.990169 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 09:36:52.990327 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 09:36:52.990463 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 09:36:52.990593 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 16 09:36:52.990732 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 09:36:52.990904 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 16 09:36:52.991041 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 09:36:52.994113 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 16 09:36:52.994318 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 16 09:36:52.994471 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 16 09:36:52.994621 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 16 09:36:52.994775 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 16 09:36:52.994925 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 16 09:36:52.995072 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 09:36:52.995277 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.995428 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 16 09:36:52.995595 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.995745 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 16 09:36:52.995912 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.996071 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 16 09:36:52.996293 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.996443 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 16 09:36:52.996597 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.996745 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 16 09:36:52.996908 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.997079 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 16 09:36:52.998351 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.998501 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 16 09:36:52.998660 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.998808 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 16 09:36:52.998978 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 16 09:36:52.999123 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 16 09:36:53.000320 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 16 09:36:53.000468 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 09:36:53.000622 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 16 09:36:53.000765 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 16 09:36:53.000919 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 16 09:36:53.001099 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 16 09:36:53.002050 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 16 09:36:53.002233 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 16 09:36:53.002383 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 16 09:36:53.002530 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 16 09:36:53.002677 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 16 09:36:53.002831 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 09:36:53.002979 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 09:36:53.004149 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:36:53.004327 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 16 09:36:53.004476 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 16 09:36:53.004623 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 09:36:53.004771 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 09:36:53.004917 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:36:53.005097 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 16 09:36:53.005310 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 16 09:36:53.005458 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 16 09:36:53.005601 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 09:36:53.005744 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 09:36:53.005899 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:36:53.006066 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 16 09:36:53.006237 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 16 09:36:53.006387 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 09:36:53.006530 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 09:36:53.006672 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:36:53.006885 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 16 09:36:53.007052 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 16 09:36:53.007259 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 09:36:53.007403 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 09:36:53.007543 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:36:53.007705 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 16 09:36:53.007860 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 16 09:36:53.008027 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 16 09:36:53.008208 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 09:36:53.008360 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 09:36:53.008502 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:36:53.008516 kernel: acpiphp: Slot [0] registered
Dec 16 09:36:53.008677 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 16 09:36:53.008831 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 16 09:36:53.009016 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 16 09:36:53.009299 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 16 09:36:53.009458 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 09:36:53.009604 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 09:36:53.009745 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:36:53.009760 kernel: acpiphp: Slot [0-2] registered
Dec 16 09:36:53.009910 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 09:36:53.010055 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 09:36:53.012242 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:36:53.012262 kernel: acpiphp: Slot [0-3] registered
Dec 16 09:36:53.012412 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 09:36:53.012610 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 09:36:53.012778 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:36:53.012795 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 09:36:53.012807 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 09:36:53.012817 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 09:36:53.012827 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 09:36:53.012838 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 09:36:53.012848 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 09:36:53.012863 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 09:36:53.012873 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 09:36:53.012884 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 09:36:53.012894 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 09:36:53.012905 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 09:36:53.012915 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 09:36:53.012943 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 09:36:53.012955 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 09:36:53.012965 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 09:36:53.012980 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 09:36:53.012990 kernel: iommu: Default domain type: Translated
Dec 16 09:36:53.013001 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 09:36:53.013011 kernel: PCI: Using ACPI for IRQ routing
Dec 16 09:36:53.013022 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 09:36:53.013032 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 09:36:53.013042 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 16 09:36:53.013237 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 09:36:53.013383 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 09:36:53.013533 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 09:36:53.013549 kernel: vgaarb: loaded
Dec 16 09:36:53.013560 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 09:36:53.013570 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 09:36:53.013581 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 09:36:53.013591 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 09:36:53.013602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 09:36:53.013612 kernel: pnp: PnP ACPI init
Dec 16 09:36:53.013771 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 09:36:53.013793 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 09:36:53.013804 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 09:36:53.013814 kernel: NET: Registered PF_INET protocol family
Dec 16 09:36:53.013825 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 09:36:53.013836 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 09:36:53.013846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 09:36:53.013857 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 09:36:53.013867 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 09:36:53.013882 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 09:36:53.013892 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 09:36:53.013903 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 09:36:53.013913 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 09:36:53.013924 kernel: NET: Registered PF_XDP protocol family
Dec 16 09:36:53.014071 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 09:36:53.014236 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 09:36:53.014383 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 09:36:53.014537 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 16 09:36:53.014692 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 16 09:36:53.014842 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 16 09:36:53.014991 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 09:36:53.017207 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 09:36:53.017373 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:36:53.017545 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 09:36:53.017741 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 09:36:53.017887 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:36:53.018033 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 09:36:53.020226 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 09:36:53.020395 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:36:53.020606 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 09:36:53.020754 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 09:36:53.020901 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:36:53.021075 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 09:36:53.021268 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 09:36:53.021419 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:36:53.021568 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 09:36:53.021716 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 09:36:53.021866 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:36:53.022013 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 09:36:53.022178 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 16 09:36:53.022330 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 09:36:53.022491 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:36:53.022729 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 09:36:53.022880 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 16 09:36:53.023029 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 09:36:53.024271 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:36:53.024427 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 09:36:53.024575 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 16 09:36:53.024726 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 09:36:53.024887 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:36:53.025067 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 09:36:53.025231 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 09:36:53.025376 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 09:36:53.025561 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 16 09:36:53.025730 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 09:36:53.025866 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 16 09:36:53.026056 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 16 09:36:53.026266 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 09:36:53.026429 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 16 09:36:53.026585 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 09:36:53.026750 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 16 09:36:53.026895 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 09:36:53.027048 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 16 09:36:53.027282 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 09:36:53.027460 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 16 09:36:53.027612 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 09:36:53.027916 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 16 09:36:53.028429 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 09:36:53.028595 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 16 09:36:53.028773 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 16 09:36:53.028944 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 09:36:53.029119 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 16 09:36:53.029354 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 16 09:36:53.029508 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 09:36:53.029666 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 16 09:36:53.029816 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 16 09:36:53.029966 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 09:36:53.029982 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 09:36:53.029999 kernel: PCI: CLS 0 bytes, default 64
Dec 16 09:36:53.030011 kernel: Initialise system trusted keyrings
Dec 16 09:36:53.030023 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 09:36:53.030034 kernel: Key type asymmetric registered
Dec 16 09:36:53.030045 kernel: Asymmetric key parser 'x509' registered
Dec 16 09:36:53.030057 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 16 09:36:53.030068 kernel: io scheduler mq-deadline registered
Dec 16 09:36:53.030079
kernel: io scheduler kyber registered Dec 16 09:36:53.030091 kernel: io scheduler bfq registered Dec 16 09:36:53.030333 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 09:36:53.030497 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 09:36:53.030648 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 09:36:53.030850 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 09:36:53.031004 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 09:36:53.031201 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 09:36:53.031354 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 09:36:53.031529 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 09:36:53.031678 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 09:36:53.031834 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 09:36:53.031983 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 09:36:53.032169 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 09:36:53.032322 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 09:36:53.032470 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 09:36:53.032660 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 09:36:53.032811 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 09:36:53.032829 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 09:36:53.033002 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 16 09:36:53.033197 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 16 09:36:53.033215 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 09:36:53.033227 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 16 09:36:53.033238 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 09:36:53.033249 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 09:36:53.033270 kernel: i8042: PNP: PS/2 Controller 
[PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 09:36:53.033281 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 09:36:53.033292 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 09:36:53.033459 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 09:36:53.033476 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 09:36:53.033611 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 09:36:53.033752 kernel: rtc_cmos 00:03: setting system clock to 2024-12-16T09:36:52 UTC (1734341812) Dec 16 09:36:53.033901 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 16 09:36:53.033918 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 09:36:53.033929 kernel: NET: Registered PF_INET6 protocol family Dec 16 09:36:53.033940 kernel: Segment Routing with IPv6 Dec 16 09:36:53.033955 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 09:36:53.033967 kernel: NET: Registered PF_PACKET protocol family Dec 16 09:36:53.033979 kernel: Key type dns_resolver registered Dec 16 09:36:53.033990 kernel: IPI shorthand broadcast: enabled Dec 16 09:36:53.034012 kernel: sched_clock: Marking stable (1171008473, 151779041)->(1331984310, -9196796) Dec 16 09:36:53.034023 kernel: registered taskstats version 1 Dec 16 09:36:53.034034 kernel: Loading compiled-in X.509 certificates Dec 16 09:36:53.034046 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 16 09:36:53.034057 kernel: Key type .fscrypt registered Dec 16 09:36:53.034071 kernel: Key type fscrypt-provisioning registered Dec 16 09:36:53.034083 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 09:36:53.034094 kernel: ima: Allocated hash algorithm: sha1 Dec 16 09:36:53.034105 kernel: ima: No architecture policies found Dec 16 09:36:53.034116 kernel: clk: Disabling unused clocks Dec 16 09:36:53.034144 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 16 09:36:53.034178 kernel: Write protecting the kernel read-only data: 36864k Dec 16 09:36:53.034225 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 16 09:36:53.034255 kernel: Run /init as init process Dec 16 09:36:53.034278 kernel: with arguments: Dec 16 09:36:53.034295 kernel: /init Dec 16 09:36:53.034307 kernel: with environment: Dec 16 09:36:53.034317 kernel: HOME=/ Dec 16 09:36:53.034328 kernel: TERM=linux Dec 16 09:36:53.034339 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 16 09:36:53.034353 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 16 09:36:53.034372 systemd[1]: Detected virtualization kvm. Dec 16 09:36:53.034398 systemd[1]: Detected architecture x86-64. Dec 16 09:36:53.034409 systemd[1]: Running in initrd. Dec 16 09:36:53.034420 systemd[1]: No hostname configured, using default hostname. Dec 16 09:36:53.034432 systemd[1]: Hostname set to . Dec 16 09:36:53.034443 systemd[1]: Initializing machine ID from VM UUID. Dec 16 09:36:53.034455 systemd[1]: Queued start job for default target initrd.target. Dec 16 09:36:53.034466 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 09:36:53.034477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 09:36:53.034496 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 09:36:53.034507 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 09:36:53.034520 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 09:36:53.034531 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 09:36:53.034544 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 09:36:53.034556 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 09:36:53.034571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 09:36:53.034583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 09:36:53.034595 systemd[1]: Reached target paths.target - Path Units. Dec 16 09:36:53.034606 systemd[1]: Reached target slices.target - Slice Units. Dec 16 09:36:53.034618 systemd[1]: Reached target swap.target - Swaps. Dec 16 09:36:53.034630 systemd[1]: Reached target timers.target - Timer Units. Dec 16 09:36:53.034641 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 09:36:53.034653 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 09:36:53.034665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 09:36:53.034680 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 16 09:36:53.034692 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 09:36:53.034707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 09:36:53.034719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 16 09:36:53.034730 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 09:36:53.034742 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 09:36:53.034753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 09:36:53.034764 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 09:36:53.034780 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 09:36:53.034792 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 09:36:53.034804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 09:36:53.034815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 09:36:53.034826 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 09:36:53.034892 systemd-journald[187]: Collecting audit messages is disabled. Dec 16 09:36:53.034926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 09:36:53.034938 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 09:36:53.034950 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 09:36:53.034966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 09:36:53.034977 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 09:36:53.034988 kernel: Bridge firewalling registered Dec 16 09:36:53.035000 systemd-journald[187]: Journal started Dec 16 09:36:53.035025 systemd-journald[187]: Runtime Journal (/run/log/journal/01b663c29a554bc6a50f58a3f93cdffe) is 4.8M, max 38.4M, 33.6M free. 
Dec 16 09:36:52.975692 systemd-modules-load[188]: Inserted module 'overlay' Dec 16 09:36:53.034412 systemd-modules-load[188]: Inserted module 'br_netfilter' Dec 16 09:36:53.073274 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 09:36:53.074277 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 09:36:53.075155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:36:53.082286 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 09:36:53.089284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 09:36:53.091912 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 09:36:53.097612 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 09:36:53.104589 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 09:36:53.112182 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 09:36:53.114280 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 09:36:53.124426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 09:36:53.125553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 16 09:36:53.135907 dracut-cmdline[219]: dracut-dracut-053 Dec 16 09:36:53.135907 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 16 09:36:53.135326 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 09:36:53.178945 systemd-resolved[226]: Positive Trust Anchors: Dec 16 09:36:53.179842 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 09:36:53.179891 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 09:36:53.182904 systemd-resolved[226]: Defaulting to hostname 'linux'. Dec 16 09:36:53.189592 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 09:36:53.190590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 09:36:53.218163 kernel: SCSI subsystem initialized Dec 16 09:36:53.226146 kernel: Loading iSCSI transport class v2.0-870. 
Dec 16 09:36:53.236156 kernel: iscsi: registered transport (tcp) Dec 16 09:36:53.254242 kernel: iscsi: registered transport (qla4xxx) Dec 16 09:36:53.254286 kernel: QLogic iSCSI HBA Driver Dec 16 09:36:53.293702 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 09:36:53.306416 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 09:36:53.335373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 09:36:53.335451 kernel: device-mapper: uevent: version 1.0.3 Dec 16 09:36:53.338377 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 16 09:36:53.385178 kernel: raid6: avx2x4 gen() 34108 MB/s Dec 16 09:36:53.402190 kernel: raid6: avx2x2 gen() 31023 MB/s Dec 16 09:36:53.419360 kernel: raid6: avx2x1 gen() 26059 MB/s Dec 16 09:36:53.419436 kernel: raid6: using algorithm avx2x4 gen() 34108 MB/s Dec 16 09:36:53.437447 kernel: raid6: .... xor() 4602 MB/s, rmw enabled Dec 16 09:36:53.437509 kernel: raid6: using avx2x2 recovery algorithm Dec 16 09:36:53.457164 kernel: xor: automatically using best checksumming function avx Dec 16 09:36:53.581176 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 09:36:53.590982 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 09:36:53.597287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 09:36:53.610739 systemd-udevd[405]: Using default interface naming scheme 'v255'. Dec 16 09:36:53.614528 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 09:36:53.621412 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 09:36:53.634956 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Dec 16 09:36:53.663766 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 09:36:53.668258 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 09:36:53.732659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 09:36:53.740606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 09:36:53.750459 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 09:36:53.753919 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 09:36:53.755036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 09:36:53.755495 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 09:36:53.761279 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 09:36:53.772245 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 09:36:53.892168 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 09:36:53.892228 kernel: libata version 3.00 loaded. Dec 16 09:36:53.900267 kernel: ACPI: bus type USB registered Dec 16 09:36:53.900323 kernel: usbcore: registered new interface driver usbfs Dec 16 09:36:53.901517 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 16 09:36:53.902770 kernel: AES CTR mode by8 optimization enabled Dec 16 09:36:53.913584 kernel: scsi host0: Virtio SCSI HBA Dec 16 09:36:53.923153 kernel: usbcore: registered new interface driver hub Dec 16 09:36:53.926184 kernel: usbcore: registered new device driver usb Dec 16 09:36:53.931183 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 16 09:36:53.936684 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 09:36:53.959992 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 09:36:53.960010 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 16 09:36:53.960179 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 09:36:53.960347 kernel: scsi host1: ahci Dec 16 09:36:53.960489 kernel: scsi host2: ahci Dec 16 09:36:53.960640 kernel: scsi host3: ahci Dec 16 09:36:53.960820 kernel: scsi host4: ahci Dec 16 09:36:53.961021 kernel: scsi host5: ahci Dec 16 09:36:53.961180 kernel: scsi host6: ahci Dec 16 09:36:53.961310 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46 Dec 16 09:36:53.961321 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46 Dec 16 09:36:53.961331 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46 Dec 16 09:36:53.961340 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46 Dec 16 09:36:53.961349 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46 Dec 16 09:36:53.961358 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46 Dec 16 09:36:53.937197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 09:36:53.937314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 09:36:53.937953 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 09:36:53.938479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 09:36:53.938586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:36:53.939401 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 09:36:53.945371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 09:36:54.008914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:36:54.014381 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 09:36:54.031507 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 09:36:54.277791 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 09:36:54.277881 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 09:36:54.277893 kernel: ata1.00: applying bridge limits Dec 16 09:36:54.277903 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 09:36:54.277913 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 09:36:54.277923 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 09:36:54.277932 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 09:36:54.280155 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 09:36:54.281155 kernel: ata1.00: configured for UDMA/100 Dec 16 09:36:54.282242 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 09:36:54.329143 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 09:36:54.347554 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 09:36:54.347573 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 16 09:36:54.350224 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 16 09:36:54.350373 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 16 09:36:54.350509 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 09:36:54.364994 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 16 09:36:54.365184 
kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 16 09:36:54.365324 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 16 09:36:54.365471 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 09:36:54.365610 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 09:36:54.365620 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 09:36:54.365745 kernel: GPT:17805311 != 80003071 Dec 16 09:36:54.365759 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 16 09:36:54.365933 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 09:36:54.365952 kernel: GPT:17805311 != 80003071 Dec 16 09:36:54.365966 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 16 09:36:54.367278 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 09:36:54.367313 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:36:54.367328 kernel: hub 1-0:1.0: USB hub found Dec 16 09:36:54.368182 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 16 09:36:54.368382 kernel: hub 1-0:1.0: 4 ports detected Dec 16 09:36:54.368555 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 09:36:54.368743 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 16 09:36:54.368999 kernel: hub 2-0:1.0: USB hub found Dec 16 09:36:54.369312 kernel: hub 2-0:1.0: 4 ports detected Dec 16 09:36:54.393786 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (455) Dec 16 09:36:54.393844 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (454) Dec 16 09:36:54.400434 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 16 09:36:54.417318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Dec 16 09:36:54.429110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 09:36:54.434640 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 16 09:36:54.435346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 16 09:36:54.444963 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 09:36:54.450390 disk-uuid[576]: Primary Header is updated. Dec 16 09:36:54.450390 disk-uuid[576]: Secondary Entries is updated. Dec 16 09:36:54.450390 disk-uuid[576]: Secondary Header is updated. Dec 16 09:36:54.456146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:36:54.463146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:36:54.589204 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 09:36:54.727180 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 09:36:54.731298 kernel: usbcore: registered new interface driver usbhid Dec 16 09:36:54.731336 kernel: usbhid: USB HID core driver Dec 16 09:36:54.738493 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 16 09:36:54.738533 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 16 09:36:55.465160 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:36:55.466193 disk-uuid[577]: The operation has completed successfully. Dec 16 09:36:55.529373 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 09:36:55.529542 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 09:36:55.545345 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Dec 16 09:36:55.550761 sh[594]: Success Dec 16 09:36:55.566412 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 16 09:36:55.617789 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 09:36:55.626237 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 09:36:55.627966 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 09:36:55.645192 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 16 09:36:55.645246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:36:55.645267 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 16 09:36:55.646949 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 09:36:55.649293 kernel: BTRFS info (device dm-0): using free space tree Dec 16 09:36:55.659172 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 09:36:55.661104 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 09:36:55.662262 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 09:36:55.669281 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 09:36:55.671265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 16 09:36:55.687092 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:36:55.687159 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:36:55.687170 kernel: BTRFS info (device sda6): using free space tree Dec 16 09:36:55.691605 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 09:36:55.691654 kernel: BTRFS info (device sda6): auto enabling async discard Dec 16 09:36:55.703528 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 16 09:36:55.704572 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:36:55.710424 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 09:36:55.715400 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 09:36:55.797820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 09:36:55.798331 ignition[693]: Ignition 2.19.0 Dec 16 09:36:55.798341 ignition[693]: Stage: fetch-offline Dec 16 09:36:55.800585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 09:36:55.798381 ignition[693]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:36:55.798391 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:36:55.798505 ignition[693]: parsed url from cmdline: "" Dec 16 09:36:55.798509 ignition[693]: no config URL provided Dec 16 09:36:55.798517 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 09:36:55.798526 ignition[693]: no config at "/usr/lib/ignition/user.ign" Dec 16 09:36:55.798532 ignition[693]: failed to fetch config: resource requires networking Dec 16 09:36:55.798917 ignition[693]: Ignition finished successfully Dec 16 09:36:55.806431 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 16 09:36:55.834358 systemd-networkd[780]: lo: Link UP Dec 16 09:36:55.834369 systemd-networkd[780]: lo: Gained carrier Dec 16 09:36:55.836792 systemd-networkd[780]: Enumeration completed Dec 16 09:36:55.837173 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 09:36:55.837598 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:36:55.837602 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:36:55.838394 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:36:55.838398 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:36:55.839984 systemd-networkd[780]: eth0: Link UP Dec 16 09:36:55.839988 systemd-networkd[780]: eth0: Gained carrier Dec 16 09:36:55.839995 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:36:55.842311 systemd[1]: Reached target network.target - Network. Dec 16 09:36:55.847771 systemd-networkd[780]: eth1: Link UP Dec 16 09:36:55.847777 systemd-networkd[780]: eth1: Gained carrier Dec 16 09:36:55.847787 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:36:55.859290 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 09:36:55.872071 ignition[782]: Ignition 2.19.0 Dec 16 09:36:55.872082 ignition[782]: Stage: fetch Dec 16 09:36:55.872373 ignition[782]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:36:55.872389 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:36:55.872484 ignition[782]: parsed url from cmdline: "" Dec 16 09:36:55.872488 ignition[782]: no config URL provided Dec 16 09:36:55.872493 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 09:36:55.872502 ignition[782]: no config at "/usr/lib/ignition/user.ign" Dec 16 09:36:55.872518 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 16 09:36:55.872652 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 16 09:36:55.877199 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 09:36:55.999243 systemd-networkd[780]: eth0: DHCPv4 address 138.199.148.223/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 16 09:36:56.072782 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 16 09:36:56.076727 ignition[782]: GET result: OK Dec 16 09:36:56.076806 ignition[782]: parsing config with SHA512: b1f9f87ba51f118e5226672f4446422cf72d9d43029f6eaaaee0699e4bcb9de24ccc2f0bfa78426a44cfd6f32077c0b92f195544cd672d48981baedfdf2a4b7b Dec 16 09:36:56.081067 unknown[782]: fetched base config from "system" Dec 16 09:36:56.081081 unknown[782]: fetched base config from "system" Dec 16 09:36:56.081507 ignition[782]: fetch: fetch complete Dec 16 09:36:56.081088 unknown[782]: fetched user config from "hetzner" Dec 16 09:36:56.081513 ignition[782]: fetch: fetch passed Dec 16 09:36:56.081559 ignition[782]: Ignition finished successfully Dec 16 09:36:56.084211 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Dec 16 09:36:56.089324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 09:36:56.115687 ignition[789]: Ignition 2.19.0
Dec 16 09:36:56.115699 ignition[789]: Stage: kargs
Dec 16 09:36:56.115848 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.115858 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.118099 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 09:36:56.116517 ignition[789]: kargs: kargs passed
Dec 16 09:36:56.116558 ignition[789]: Ignition finished successfully
Dec 16 09:36:56.125339 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 09:36:56.139263 ignition[795]: Ignition 2.19.0
Dec 16 09:36:56.139280 ignition[795]: Stage: disks
Dec 16 09:36:56.139474 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.139488 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.142274 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 09:36:56.140536 ignition[795]: disks: disks passed
Dec 16 09:36:56.143714 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 09:36:56.140594 ignition[795]: Ignition finished successfully
Dec 16 09:36:56.145367 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 09:36:56.147367 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 09:36:56.147927 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 09:36:56.148907 systemd[1]: Reached target basic.target - Basic System.
Dec 16 09:36:56.156405 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 09:36:56.172197 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 16 09:36:56.176036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 09:36:56.181230 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 09:36:56.262167 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 16 09:36:56.262675 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 09:36:56.263637 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 09:36:56.272219 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 09:36:56.276240 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 09:36:56.283583 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (811)
Dec 16 09:36:56.286444 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:56.286475 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 09:36:56.286683 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 09:36:56.291197 kernel: BTRFS info (device sda6): using free space tree
Dec 16 09:36:56.292047 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 09:36:56.292082 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 09:36:56.295547 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 09:36:56.302230 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 09:36:56.302282 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 16 09:36:56.306565 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 09:36:56.308349 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 09:36:56.348269 coreos-metadata[813]: Dec 16 09:36:56.348 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 16 09:36:56.350047 coreos-metadata[813]: Dec 16 09:36:56.350 INFO Fetch successful
Dec 16 09:36:56.352089 coreos-metadata[813]: Dec 16 09:36:56.351 INFO wrote hostname ci-4081-2-1-b-2c3a583fea to /sysroot/etc/hostname
Dec 16 09:36:56.354630 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 09:36:56.357914 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 09:36:56.362214 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Dec 16 09:36:56.366049 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 09:36:56.370654 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 09:36:56.458555 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 09:36:56.465252 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 09:36:56.468308 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 09:36:56.478186 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:56.501314 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 09:36:56.511106 ignition[928]: INFO : Ignition 2.19.0
Dec 16 09:36:56.511106 ignition[928]: INFO : Stage: mount
Dec 16 09:36:56.513735 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.513735 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.513735 ignition[928]: INFO : mount: mount passed
Dec 16 09:36:56.513735 ignition[928]: INFO : Ignition finished successfully
Dec 16 09:36:56.516160 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 09:36:56.522232 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 09:36:56.642621 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 09:36:56.645291 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 09:36:56.659166 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940)
Dec 16 09:36:56.659235 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 16 09:36:56.662101 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 09:36:56.662142 kernel: BTRFS info (device sda6): using free space tree
Dec 16 09:36:56.667386 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 09:36:56.667417 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 16 09:36:56.670689 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 09:36:56.694715 ignition[956]: INFO : Ignition 2.19.0
Dec 16 09:36:56.695570 ignition[956]: INFO : Stage: files
Dec 16 09:36:56.695570 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 09:36:56.695570 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:36:56.697434 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 09:36:56.697434 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 09:36:56.697434 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 09:36:56.701361 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 09:36:56.702245 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 09:36:56.703199 unknown[956]: wrote ssh authorized keys file for user: core
Dec 16 09:36:56.704059 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 09:36:56.706446 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 16 09:36:56.706446 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 16 09:36:56.762566 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 09:36:56.924883 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 16 09:36:56.924883 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 09:36:56.924883 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 09:36:57.368333 systemd-networkd[780]: eth0: Gained IPv6LL
Dec 16 09:36:57.485069 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 09:36:57.568679 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 09:36:57.568679 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 16 09:36:57.571338 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 16 09:36:57.752371 systemd-networkd[780]: eth1: Gained IPv6LL
Dec 16 09:36:58.224503 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 09:37:01.714152 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 16 09:37:01.714152 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 09:37:01.717148 ignition[956]: INFO : files: files passed
Dec 16 09:37:01.717148 ignition[956]: INFO : Ignition finished successfully
Dec 16 09:37:01.719046 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 09:37:01.729380 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 09:37:01.737796 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 09:37:01.743782 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 09:37:01.743962 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 09:37:01.751367 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 09:37:01.751367 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 09:37:01.754348 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 09:37:01.755035 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 09:37:01.756531 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 09:37:01.763352 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 09:37:01.791782 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 09:37:01.791925 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 09:37:01.793223 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 09:37:01.794017 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 09:37:01.795084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 09:37:01.797263 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 09:37:01.813472 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 09:37:01.820279 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 09:37:01.828218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 09:37:01.829429 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 09:37:01.830726 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 09:37:01.831210 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 09:37:01.831309 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 09:37:01.832638 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 09:37:01.833373 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 09:37:01.834333 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 09:37:01.835385 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 09:37:01.836356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 09:37:01.837375 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 09:37:01.838592 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 09:37:01.839564 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 09:37:01.840557 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 09:37:01.841626 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 09:37:01.842569 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 09:37:01.842670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 09:37:01.843776 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 09:37:01.844421 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 09:37:01.845336 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 09:37:01.845463 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 09:37:01.846422 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 09:37:01.846550 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 09:37:01.847900 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 09:37:01.848005 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 09:37:01.849160 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 09:37:01.849308 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 09:37:01.850225 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 09:37:01.850411 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 09:37:01.857562 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 09:37:01.860312 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 09:37:01.860895 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 09:37:01.861118 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 09:37:01.862475 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 09:37:01.863276 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 09:37:01.872575 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 09:37:01.872683 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 09:37:01.876080 ignition[1010]: INFO : Ignition 2.19.0
Dec 16 09:37:01.876080 ignition[1010]: INFO : Stage: umount
Dec 16 09:37:01.878428 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 09:37:01.878428 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 09:37:01.878428 ignition[1010]: INFO : umount: umount passed
Dec 16 09:37:01.878428 ignition[1010]: INFO : Ignition finished successfully
Dec 16 09:37:01.878818 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 09:37:01.878947 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 09:37:01.881205 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 09:37:01.881286 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 09:37:01.882391 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 09:37:01.882459 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 09:37:01.883790 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 09:37:01.883835 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 09:37:01.884882 systemd[1]: Stopped target network.target - Network.
Dec 16 09:37:01.886931 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 09:37:01.886984 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 09:37:01.887882 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 09:37:01.888876 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 09:37:01.892185 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 09:37:01.894456 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 09:37:01.896419 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 09:37:01.896888 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 09:37:01.896936 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 09:37:01.897533 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 09:37:01.897575 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 09:37:01.898040 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 09:37:01.898093 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 09:37:01.900268 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 09:37:01.900353 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 09:37:01.901516 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 09:37:01.903632 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 09:37:01.905271 systemd-networkd[780]: eth0: DHCPv6 lease lost
Dec 16 09:37:01.906115 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 09:37:01.906928 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 09:37:01.907062 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 09:37:01.908102 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 09:37:01.908217 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 09:37:01.910238 systemd-networkd[780]: eth1: DHCPv6 lease lost
Dec 16 09:37:01.911757 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 09:37:01.911923 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 09:37:01.913746 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 09:37:01.913847 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 09:37:01.915741 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 09:37:01.915800 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 09:37:01.923236 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 09:37:01.924013 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 09:37:01.924066 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 09:37:01.926809 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 09:37:01.926877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:37:01.927493 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 09:37:01.927559 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 09:37:01.928630 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 09:37:01.928675 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 09:37:01.929678 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 09:37:01.947769 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 09:37:01.948450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 09:37:01.949375 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 09:37:01.949469 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 09:37:01.952157 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 09:37:01.952211 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 09:37:01.953295 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 09:37:01.953332 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 09:37:01.954287 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 09:37:01.954335 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 09:37:01.955728 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 09:37:01.955794 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 09:37:01.956700 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 09:37:01.956777 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 09:37:01.964312 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 09:37:01.965640 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 09:37:01.965696 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 09:37:01.966843 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 09:37:01.966895 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 09:37:01.968102 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 09:37:01.968178 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 09:37:01.969364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:37:01.969408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:37:01.970907 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 09:37:01.971003 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 09:37:01.971903 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 09:37:01.977289 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 09:37:01.983680 systemd[1]: Switching root.
Dec 16 09:37:02.015220 systemd-journald[187]: Journal stopped
Dec 16 09:37:02.998824 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 16 09:37:02.998892 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 09:37:02.998915 kernel: SELinux: policy capability open_perms=1
Dec 16 09:37:02.998927 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 09:37:02.998940 kernel: SELinux: policy capability always_check_network=0
Dec 16 09:37:02.998950 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 09:37:02.998974 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 09:37:02.998984 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 09:37:02.998995 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 09:37:02.999005 kernel: audit: type=1403 audit(1734341822.171:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 09:37:02.999016 systemd[1]: Successfully loaded SELinux policy in 52.430ms.
Dec 16 09:37:02.999032 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.963ms.
Dec 16 09:37:02.999046 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 16 09:37:02.999057 systemd[1]: Detected virtualization kvm.
Dec 16 09:37:02.999068 systemd[1]: Detected architecture x86-64.
Dec 16 09:37:02.999078 systemd[1]: Detected first boot.
Dec 16 09:37:02.999090 systemd[1]: Hostname set to .
Dec 16 09:37:02.999100 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 09:37:02.999111 zram_generator::config[1052]: No configuration found.
Dec 16 09:37:02.999123 systemd[1]: Populated /etc with preset unit settings.
Dec 16 09:37:02.999286 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 09:37:02.999300 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 09:37:02.999311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 09:37:02.999323 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 09:37:02.999334 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 09:37:02.999345 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 09:37:02.999356 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 09:37:02.999367 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 09:37:02.999381 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 09:37:02.999392 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 09:37:02.999403 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 09:37:02.999414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 09:37:02.999425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 09:37:02.999436 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 09:37:02.999447 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 09:37:02.999458 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 09:37:02.999470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 09:37:02.999482 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 09:37:02.999493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 09:37:02.999505 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 09:37:02.999516 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 09:37:02.999527 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 09:37:02.999538 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 09:37:02.999551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 09:37:02.999562 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 09:37:02.999573 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 09:37:02.999583 systemd[1]: Reached target swap.target - Swaps.
Dec 16 09:37:02.999594 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 09:37:02.999607 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 09:37:02.999618 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 09:37:02.999628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 09:37:02.999641 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 09:37:02.999660 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 09:37:02.999677 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 09:37:02.999688 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 09:37:02.999699 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 09:37:02.999709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:02.999720 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 09:37:02.999740 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 09:37:02.999752 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 09:37:02.999763 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 09:37:02.999774 systemd[1]: Reached target machines.target - Containers.
Dec 16 09:37:02.999784 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 09:37:02.999795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:37:02.999806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 09:37:02.999816 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 09:37:02.999827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 09:37:02.999840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 09:37:02.999850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 09:37:02.999862 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 09:37:02.999873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 09:37:02.999883 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 09:37:02.999894 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 09:37:02.999904 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 09:37:02.999915 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 09:37:02.999928 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 09:37:02.999939 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 09:37:02.999949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 09:37:02.999959 kernel: loop: module loaded
Dec 16 09:37:02.999970 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 09:37:02.999981 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 09:37:03.000009 systemd-journald[1128]: Collecting audit messages is disabled.
Dec 16 09:37:03.000030 systemd-journald[1128]: Journal started
Dec 16 09:37:03.000052 systemd-journald[1128]: Runtime Journal (/run/log/journal/01b663c29a554bc6a50f58a3f93cdffe) is 4.8M, max 38.4M, 33.6M free.
Dec 16 09:36:02.733361 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 09:36:02.752052 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 09:36:02.752529 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 09:37:03.003601 kernel: ACPI: bus type drm_connector registered
Dec 16 09:37:03.018161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 09:37:03.018251 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 09:37:03.018275 systemd[1]: Stopped verity-setup.service.
Dec 16 09:37:03.018296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:03.024638 kernel: fuse: init (API version 7.39)
Dec 16 09:37:03.024692 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 09:37:03.031871 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 09:37:03.033109 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 09:37:03.034694 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 09:37:03.035605 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 09:37:03.036440 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 09:37:03.038570 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 09:37:03.042399 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 09:37:03.043441 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 09:37:03.043627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 09:37:03.044862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:37:03.045083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:37:03.046014 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 09:37:03.046308 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 09:37:03.047424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:37:03.047604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:37:03.048795 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 09:37:03.049388 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 09:37:03.050520 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:37:03.050700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:37:03.051800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 09:37:03.052788 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 09:37:03.054180 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 09:37:03.055508 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 09:37:03.074858 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 09:37:03.083344 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 09:37:03.091877 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 09:37:03.092698 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 09:37:03.092811 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 09:37:03.094694 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 16 09:37:03.102325 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 09:37:03.106306 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 09:37:03.107021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:37:03.116822 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 09:37:03.119230 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 09:37:03.119753 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 09:37:03.124698 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 09:37:03.125400 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 09:37:03.134383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 09:37:03.144494 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 09:37:03.153316 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 09:37:03.158242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 09:37:03.162617 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 09:37:03.165792 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 09:37:03.193810 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 09:37:03.196785 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 09:37:03.204219 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Dec 16 09:37:03.204237 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Dec 16 09:37:03.207389 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 16 09:37:03.215865 systemd-journald[1128]: Time spent on flushing to /var/log/journal/01b663c29a554bc6a50f58a3f93cdffe is 63.679ms for 1144 entries.
Dec 16 09:37:03.215865 systemd-journald[1128]: System Journal (/var/log/journal/01b663c29a554bc6a50f58a3f93cdffe) is 8.0M, max 584.8M, 576.8M free.
Dec 16 09:37:03.312123 systemd-journald[1128]: Received client request to flush runtime journal.
Dec 16 09:37:03.312206 kernel: loop0: detected capacity change from 0 to 8
Dec 16 09:37:03.312235 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 09:37:03.312250 kernel: loop1: detected capacity change from 0 to 210664
Dec 16 09:37:03.220061 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 09:37:03.224720 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 09:37:03.235304 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 09:37:03.244385 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 16 09:37:03.249621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:37:03.295664 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 16 09:37:03.310178 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 09:37:03.321347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 09:37:03.323094 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 09:37:03.333736 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 09:37:03.337282 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 16 09:37:03.360797 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Dec 16 09:37:03.360824 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Dec 16 09:37:03.366189 kernel: loop2: detected capacity change from 0 to 140768
Dec 16 09:37:03.367646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 09:37:03.425181 kernel: loop3: detected capacity change from 0 to 142488
Dec 16 09:37:03.472165 kernel: loop4: detected capacity change from 0 to 8
Dec 16 09:37:03.478157 kernel: loop5: detected capacity change from 0 to 210664
Dec 16 09:37:03.509176 kernel: loop6: detected capacity change from 0 to 140768
Dec 16 09:37:03.535242 kernel: loop7: detected capacity change from 0 to 142488
Dec 16 09:37:03.558517 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 16 09:37:03.559271 (sd-merge)[1202]: Merged extensions into '/usr'.
Dec 16 09:37:03.565638 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 09:37:03.565660 systemd[1]: Reloading...
Dec 16 09:37:03.653174 zram_generator::config[1229]: No configuration found.
Dec 16 09:37:03.717617 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 09:37:03.808003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 16 09:37:03.852794 systemd[1]: Reloading finished in 286 ms.
Dec 16 09:37:03.904527 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 09:37:03.905433 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 09:37:03.906303 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 09:37:03.916256 systemd[1]: Starting ensure-sysext.service...
Dec 16 09:37:03.917922 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 09:37:03.920272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 09:37:03.933281 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Dec 16 09:37:03.933389 systemd[1]: Reloading...
Dec 16 09:37:03.954044 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 09:37:03.954397 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 09:37:03.955240 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 09:37:03.955484 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Dec 16 09:37:03.955550 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Dec 16 09:37:03.960765 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 09:37:03.960778 systemd-tmpfiles[1273]: Skipping /boot
Dec 16 09:37:03.968441 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
Dec 16 09:37:03.978294 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 09:37:03.978305 systemd-tmpfiles[1273]: Skipping /boot
Dec 16 09:37:04.029164 zram_generator::config[1305]: No configuration found.
Dec 16 09:37:04.100153 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1331)
Dec 16 09:37:04.109745 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1331)
Dec 16 09:37:04.152152 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1308)
Dec 16 09:37:04.186631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 16 09:37:04.239169 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 09:37:04.255164 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 09:37:04.260625 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 09:37:04.261573 systemd[1]: Reloading finished in 327 ms.
Dec 16 09:37:04.262150 kernel: ACPI: button: Power Button [PWRF]
Dec 16 09:37:04.278846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 09:37:04.280142 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 09:37:04.309681 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 16 09:37:04.311916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:04.320280 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 16 09:37:04.327355 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 16 09:37:04.329326 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 09:37:04.329931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:37:04.337934 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 09:37:04.341119 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 16 09:37:04.341653 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 09:37:04.346880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 09:37:04.357389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 09:37:04.364451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 09:37:04.365199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:37:04.370195 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 09:37:04.384103 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 09:37:04.389154 kernel: EDAC MC: Ver: 3.0.0
Dec 16 09:37:04.394550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 09:37:04.397967 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 09:37:04.398475 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:04.401037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:37:04.402046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:37:04.403588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:37:04.405246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:37:04.406110 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:37:04.406750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:37:04.417585 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:04.417737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:37:04.425212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 09:37:04.432346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 09:37:04.435485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 09:37:04.436719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:37:04.441273 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 09:37:04.442151 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 16 09:37:04.442189 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 16 09:37:04.455169 kernel: Console: switching to colour dummy device 80x25
Dec 16 09:37:04.455303 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 16 09:37:04.455333 kernel: [drm] features: -context_init
Dec 16 09:37:04.455358 kernel: [drm] number of scanouts: 1
Dec 16 09:37:04.455396 kernel: [drm] number of cap sets: 0
Dec 16 09:37:04.460178 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 16 09:37:04.457088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:04.471151 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 16 09:37:04.471211 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 09:37:04.483636 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 16 09:37:04.475882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 09:37:04.491472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 09:37:04.491686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 09:37:04.492440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 09:37:04.492641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 09:37:04.493704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 09:37:04.493877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 09:37:04.503755 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 09:37:04.516207 systemd[1]: Finished ensure-sysext.service.
Dec 16 09:37:04.521146 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 09:37:04.524888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:04.525267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 09:37:04.534033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 09:37:04.536148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 09:37:04.537677 augenrules[1416]: No rules
Dec 16 09:37:04.539342 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 09:37:04.542535 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 09:37:04.542662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 09:37:04.545450 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 09:37:04.548316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:37:04.549957 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 09:37:04.550052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 09:37:04.552897 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 09:37:04.553744 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 16 09:37:04.555242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 09:37:04.555703 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 09:37:04.555844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 09:37:04.572464 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 09:37:04.586908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 09:37:04.588193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:37:04.597414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 09:37:04.603605 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 09:37:04.620683 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 09:37:04.621195 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 16 09:37:04.626501 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 16 09:37:04.647712 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 16 09:37:04.684154 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 16 09:37:04.686899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 09:37:04.700310 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 16 09:37:04.705772 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 09:37:04.706908 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 09:37:04.713872 systemd-networkd[1390]: lo: Link UP
Dec 16 09:37:04.713880 systemd-networkd[1390]: lo: Gained carrier
Dec 16 09:37:04.717142 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 16 09:37:04.718590 systemd-networkd[1390]: Enumeration completed
Dec 16 09:37:04.718691 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 09:37:04.727012 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:04.727025 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:37:04.727781 systemd-networkd[1390]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:04.727791 systemd-networkd[1390]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 09:37:04.728292 systemd-networkd[1390]: eth0: Link UP
Dec 16 09:37:04.728301 systemd-networkd[1390]: eth0: Gained carrier
Dec 16 09:37:04.728312 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:04.729053 systemd-resolved[1391]: Positive Trust Anchors:
Dec 16 09:37:04.729841 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 09:37:04.730509 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 09:37:04.730540 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 09:37:04.736552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 09:37:04.737794 systemd-networkd[1390]: eth1: Link UP
Dec 16 09:37:04.737805 systemd-networkd[1390]: eth1: Gained carrier
Dec 16 09:37:04.737824 systemd-networkd[1390]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 09:37:04.738197 systemd-resolved[1391]: Using system hostname 'ci-4081-2-1-b-2c3a583fea'.
Dec 16 09:37:04.744155 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 09:37:04.745640 systemd[1]: Reached target network.target - Network.
Dec 16 09:37:04.746387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 09:37:04.747085 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 09:37:04.747906 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 09:37:04.748649 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 09:37:04.749661 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 09:37:04.750590 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 09:37:04.751310 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 09:37:04.752019 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 09:37:04.752161 systemd[1]: Reached target paths.target - Path Units.
Dec 16 09:37:04.752844 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 09:37:04.759172 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 09:37:04.761802 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 09:37:04.771576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 09:37:04.772898 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 16 09:37:04.774204 systemd-networkd[1390]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 09:37:04.775264 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection.
Dec 16 09:37:04.775885 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 09:37:04.777627 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 09:37:04.778171 systemd[1]: Reached target basic.target - Basic System.
Dec 16 09:37:04.778898 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 09:37:04.779015 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 09:37:04.787322 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 09:37:04.790248 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 09:37:04.792555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 09:37:04.803275 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 09:37:04.809681 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 09:37:04.812197 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 09:37:04.819833 jq[1459]: false
Dec 16 09:37:04.820308 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 09:37:04.827225 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 09:37:04.837707 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 16 09:37:04.849208 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 09:37:04.854855 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found loop4
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found loop5
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found loop6
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found loop7
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda1
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda2
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda3
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found usr
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda4
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda6
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda7
Dec 16 09:37:04.861574 extend-filesystems[1462]: Found sda9
Dec 16 09:37:04.861574 extend-filesystems[1462]: Checking size of /dev/sda9
Dec 16 09:37:04.912414 coreos-metadata[1457]: Dec 16 09:37:04.908 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 16 09:37:04.912414 coreos-metadata[1457]: Dec 16 09:37:04.910 INFO Fetch successful
Dec 16 09:37:04.912414 coreos-metadata[1457]: Dec 16 09:37:04.911 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 16 09:37:04.890552 dbus-daemon[1458]: [system] SELinux support is enabled
Dec 16 09:37:04.865394 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 09:37:04.879005 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 09:37:04.879588 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 09:37:04.879710 systemd-networkd[1390]: eth0: DHCPv4 address 138.199.148.223/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 16 09:37:04.916422 jq[1481]: true
Dec 16 09:37:04.884548 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection.
Dec 16 09:37:04.886037 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 09:37:04.903888 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 09:37:04.907558 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 09:37:04.919429 coreos-metadata[1457]: Dec 16 09:37:04.917 INFO Fetch successful
Dec 16 09:37:04.919477 extend-filesystems[1462]: Resized partition /dev/sda9
Dec 16 09:37:04.923680 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 09:37:04.923883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 09:37:04.925392 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 09:37:04.925615 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 09:37:04.932164 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024)
Dec 16 09:37:04.933493 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 09:37:04.933721 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 09:37:04.943283 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 16 09:37:04.955353 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 09:37:04.955404 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 09:37:04.955951 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 09:37:04.955982 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 09:37:04.961003 update_engine[1479]: I20241216 09:37:04.959353 1479 main.cc:92] Flatcar Update Engine starting
Dec 16 09:37:04.963519 update_engine[1479]: I20241216 09:37:04.963399 1479 update_check_scheduler.cc:74] Next update check in 9m1s
Dec 16 09:37:04.964199 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 09:37:04.971271 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 09:37:04.975386 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 09:37:04.994666 tar[1487]: linux-amd64/helm
Dec 16 09:37:05.004375 jq[1490]: true
Dec 16 09:37:05.019164 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1307)
Dec 16 09:37:05.048504 systemd-logind[1475]: New seat seat0.
Dec 16 09:37:05.055762 systemd-logind[1475]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 09:37:05.055790 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 09:37:05.056713 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 09:37:05.108458 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 09:37:05.111811 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 09:37:05.131161 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 16 09:37:05.151598 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 09:37:05.151598 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 16 09:37:05.151598 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 16 09:37:05.173603 extend-filesystems[1462]: Resized filesystem in /dev/sda9
Dec 16 09:37:05.173603 extend-filesystems[1462]: Found sr0
Dec 16 09:37:05.152099 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 09:37:05.155361 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 09:37:05.195668 bash[1527]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 09:37:05.195518 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 09:37:05.208370 systemd[1]: Starting sshkeys.service...
Dec 16 09:37:05.238059 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 09:37:05.249721 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
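As a quick sanity check on the resize reported above (not part of the boot itself): the block counts logged by EXT4/resize2fs are 4 KiB blocks, so they convert to byte sizes as follows.

```python
# Convert the 4 KiB block counts reported by resize2fs/EXT4 above
# (1617920 -> 9393147 blocks on /dev/sda9) into human-readable sizes.
BLOCK = 4096  # ext4 block size, per the "(4k) blocks" note in the log

def blocks_to_gib(blocks: int) -> float:
    """Convert a count of 4 KiB blocks to GiB."""
    return blocks * BLOCK / 2**30

old_size = blocks_to_gib(1617920)
new_size = blocks_to_gib(9393147)
print(f"root fs grown from {old_size:.2f} GiB to {new_size:.2f} GiB")
```

This matches the usual Flatcar first-boot behavior: the ROOT partition is grown to fill the disk, then the mounted filesystem is resized online.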
Dec 16 09:37:05.287857 containerd[1491]: time="2024-12-16T09:37:05.287762077Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 16 09:37:05.296640 coreos-metadata[1540]: Dec 16 09:37:05.296 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 16 09:37:05.298526 coreos-metadata[1540]: Dec 16 09:37:05.298 INFO Fetch successful
Dec 16 09:37:05.300343 unknown[1540]: wrote ssh authorized keys file for user: core
Dec 16 09:37:05.322858 update-ssh-keys[1546]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 09:37:05.324214 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 09:37:05.332604 systemd[1]: Finished sshkeys.service.
Dec 16 09:37:05.341168 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 09:37:05.341180 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 09:37:05.341781 containerd[1491]: time="2024-12-16T09:37:05.341738310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343387103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343412370Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343426146Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343581187Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343600963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343672888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343685071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343851053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343866551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343879506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344153 containerd[1491]: time="2024-12-16T09:37:05.343888012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344361 containerd[1491]: time="2024-12-16T09:37:05.343971949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344361 containerd[1491]: time="2024-12-16T09:37:05.344207661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344361 containerd[1491]: time="2024-12-16T09:37:05.344306687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 16 09:37:05.344361 containerd[1491]: time="2024-12-16T09:37:05.344318349Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 16 09:37:05.344429 containerd[1491]: time="2024-12-16T09:37:05.344403108Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 16 09:37:05.344596 containerd[1491]: time="2024-12-16T09:37:05.344454344Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 09:37:05.347820 containerd[1491]: time="2024-12-16T09:37:05.347791503Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 16 09:37:05.347863 containerd[1491]: time="2024-12-16T09:37:05.347840515Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 16 09:37:05.347863 containerd[1491]: time="2024-12-16T09:37:05.347857106Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 16 09:37:05.347911 containerd[1491]: time="2024-12-16T09:37:05.347870511Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 16 09:37:05.347911 containerd[1491]: time="2024-12-16T09:37:05.347884627Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 16 09:37:05.348041 containerd[1491]: time="2024-12-16T09:37:05.348004512Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 16 09:37:05.349425 containerd[1491]: time="2024-12-16T09:37:05.349340969Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 16 09:37:05.349482 containerd[1491]: time="2024-12-16T09:37:05.349450715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 16 09:37:05.349482 containerd[1491]: time="2024-12-16T09:37:05.349465302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 16 09:37:05.349526 containerd[1491]: time="2024-12-16T09:37:05.349481422Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 16 09:37:05.349526 containerd[1491]: time="2024-12-16T09:37:05.349494487Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349526 containerd[1491]: time="2024-12-16T09:37:05.349505407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349526 containerd[1491]: time="2024-12-16T09:37:05.349515777Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349602 containerd[1491]: time="2024-12-16T09:37:05.349526427Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349602 containerd[1491]: time="2024-12-16T09:37:05.349537959Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349602 containerd[1491]: time="2024-12-16T09:37:05.349548518Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349602 containerd[1491]: time="2024-12-16T09:37:05.349559368Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349602 containerd[1491]: time="2024-12-16T09:37:05.349568876Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 16 09:37:05.349602 containerd[1491]: time="2024-12-16T09:37:05.349585587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349602 containerd[1491]: time="2024-12-16T09:37:05.349596037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349606176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349616726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349626945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349637966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349647684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349657623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349668012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349680436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349691096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349702838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349722164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349736290Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349753111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349765014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.349781 containerd[1491]: time="2024-12-16T09:37:05.349781485Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.349853730Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.349876423Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.349886452Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.349977051Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.349992611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.350010855Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.350024752Z" level=info msg="NRI interface is disabled by configuration."
Dec 16 09:37:05.350067 containerd[1491]: time="2024-12-16T09:37:05.350034540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 16 09:37:05.351400 containerd[1491]: time="2024-12-16T09:37:05.350415434Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 16 09:37:05.351400 containerd[1491]: time="2024-12-16T09:37:05.350482229Z" level=info msg="Connect containerd service"
Dec 16 09:37:05.351400 containerd[1491]: time="2024-12-16T09:37:05.350513648Z" level=info msg="using legacy CRI server"
Dec 16 09:37:05.351400 containerd[1491]: time="2024-12-16T09:37:05.350520000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 09:37:05.351400 containerd[1491]: time="2024-12-16T09:37:05.350622262Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 16 09:37:05.352821 containerd[1491]: time="2024-12-16T09:37:05.352615701Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 09:37:05.352821 containerd[1491]: time="2024-12-16T09:37:05.352739422Z" level=info msg="Start subscribing containerd event"
Dec 16 09:37:05.352821 containerd[1491]: time="2024-12-16T09:37:05.352782323Z" level=info msg="Start recovering state"
Dec 16 09:37:05.352899 containerd[1491]: time="2024-12-16T09:37:05.352835583Z" level=info msg="Start event monitor"
Dec 16 09:37:05.352899 containerd[1491]: time="2024-12-16T09:37:05.352857364Z" level=info msg="Start snapshots syncer"
Dec 16 09:37:05.352899 containerd[1491]: time="2024-12-16T09:37:05.352864827Z" level=info msg="Start cni network conf syncer for default"
Dec 16 09:37:05.352899 containerd[1491]: time="2024-12-16T09:37:05.352871691Z" level=info msg="Start streaming server"
Dec 16 09:37:05.358842 containerd[1491]: time="2024-12-16T09:37:05.353321845Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 09:37:05.358842 containerd[1491]: time="2024-12-16T09:37:05.353385745Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 09:37:05.358842 containerd[1491]: time="2024-12-16T09:37:05.353434857Z" level=info msg="containerd successfully booted in 0.069351s"
Dec 16 09:37:05.353754 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 09:37:05.378148 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 09:37:05.388325 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 09:37:05.395715 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 09:37:05.395929 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 09:37:05.409243 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 09:37:05.426825 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 09:37:05.439404 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 09:37:05.449463 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 09:37:05.451919 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 09:37:05.643198 tar[1487]: linux-amd64/LICENSE
Dec 16 09:37:05.643312 tar[1487]: linux-amd64/README.md
Dec 16 09:37:05.654352 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 09:37:06.008348 systemd-networkd[1390]: eth1: Gained IPv6LL
Dec 16 09:37:06.009448 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection.
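The "Start cri plugin with config" dump above is containerd's parsed CRI configuration. A hedged reconstruction of the kind of `/etc/containerd/config.toml` fragment that would yield the key settings visible in that dump (overlayfs snapshotter, runc via `io.containerd.runc.v2` with the systemd cgroup driver, `pause:3.8` sandbox image, standard CNI paths); this is illustrative, not the node's actual file:

```toml
# Sketch of a containerd 1.7 CRI config matching the values in the log dump.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"   # SandboxImage in the dump

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"                     # Snapshotter:overlayfs
  default_runtime_name = "runc"                 # DefaultRuntimeName:runc

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true                          # Options:map[SystemdCgroup:true]

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/opt/cni/bin"                      # NetworkPluginBinDir
  conf_dir = "/etc/cni/net.d"                   # NetworkPluginConfDir
```

The `level=error` line about CNI is consistent with this: `/etc/cni/net.d` is empty until a network plugin is installed, so containerd defers pod networking setup.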
Dec 16 09:37:06.012214 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 09:37:06.013862 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 09:37:06.026374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:06.029262 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 09:37:06.058975 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 09:37:06.072844 systemd-networkd[1390]: eth0: Gained IPv6LL
Dec 16 09:37:06.073456 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection.
Dec 16 09:37:06.897269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:06.903735 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 09:37:06.904425 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:06.907240 systemd[1]: Startup finished in 1.324s (kernel) + 9.440s (initrd) + 4.783s (userspace) = 15.549s.
Dec 16 09:37:07.536570 kubelet[1588]: E1216 09:37:07.536486 1588 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:07.540742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:07.541055 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:17.791669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 09:37:17.802662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:17.957923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:17.962693 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:18.004573 kubelet[1608]: E1216 09:37:18.004483 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:18.012097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:18.012319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:28.263079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 09:37:28.270412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:28.399417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:28.410524 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:28.456442 kubelet[1624]: E1216 09:37:28.456369 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:28.460194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:28.460449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:37.109633 systemd-timesyncd[1427]: Contacted time server 185.168.228.58:123 (2.flatcar.pool.ntp.org).
Dec 16 09:37:37.109718 systemd-timesyncd[1427]: Initial clock synchronization to Mon 2024-12-16 09:37:37.109341 UTC.
Dec 16 09:37:37.110127 systemd-resolved[1391]: Clock change detected. Flushing caches.
Dec 16 09:37:39.436095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 09:37:39.442709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:39.585171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:39.598814 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:39.636976 kubelet[1640]: E1216 09:37:39.636903 1640 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:39.640724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:39.640966 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:49.846146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 16 09:37:49.851588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:37:49.983657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:37:49.987801 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:37:50.028174 kubelet[1656]: E1216 09:37:50.028130 1656 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:37:50.032061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:37:50.032263 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:37:51.104130 update_engine[1479]: I20241216 09:37:51.104040 1479 update_attempter.cc:509] Updating boot flags...
Dec 16 09:37:51.145475 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1672)
Dec 16 09:37:51.199856 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1668)
Dec 16 09:37:51.241511 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1668)
Dec 16 09:38:00.096216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 16 09:38:00.102966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:00.239705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:00.244081 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:00.282891 kubelet[1692]: E1216 09:38:00.282846 1692 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:00.287116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:00.287341 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:10.346209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 16 09:38:10.351625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:10.490921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:10.498749 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:10.534808 kubelet[1708]: E1216 09:38:10.534714 1708 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:10.538885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:10.539082 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:20.596339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 16 09:38:20.603925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:20.765184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:20.788910 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:20.837136 kubelet[1724]: E1216 09:38:20.837046 1724 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:20.841527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:20.841797 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:30.846251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 16 09:38:30.852676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:38:31.023750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:38:31.025607 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:38:31.068168 kubelet[1740]: E1216 09:38:31.068082 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:38:31.072399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:38:31.072609 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:38:41.096236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 16 09:38:41.101650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:38:41.234608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:38:41.235466 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:38:41.268755 kubelet[1757]: E1216 09:38:41.268687 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:38:41.272648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:38:41.272836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:38:51.346076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 16 09:38:51.351590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:38:51.484082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:38:51.488096 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:38:51.525664 kubelet[1773]: E1216 09:38:51.525602 1773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:38:51.529182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:38:51.529397 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 09:39:01.304958 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 09:39:01.309687 systemd[1]: Started sshd@0-138.199.148.223:22-147.75.109.163:57818.service - OpenSSH per-connection server daemon (147.75.109.163:57818).
Dec 16 09:39:01.595979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 16 09:39:01.611740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:01.777405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:01.784809 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:39:01.833575 kubelet[1792]: E1216 09:39:01.833475 1792 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:39:01.838008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:39:01.838278 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:39:02.291332 sshd[1782]: Accepted publickey for core from 147.75.109.163 port 57818 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:02.293742 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:02.302896 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 09:39:02.307817 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 09:39:02.309954 systemd-logind[1475]: New session 1 of user core.
Dec 16 09:39:02.339031 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 09:39:02.350828 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 09:39:02.354270 (systemd)[1802]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 09:39:02.464673 systemd[1802]: Queued start job for default target default.target.
Dec 16 09:39:02.475368 systemd[1802]: Created slice app.slice - User Application Slice.
Dec 16 09:39:02.475409 systemd[1802]: Reached target paths.target - Paths.
Dec 16 09:39:02.475447 systemd[1802]: Reached target timers.target - Timers.
Dec 16 09:39:02.477529 systemd[1802]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 09:39:02.490181 systemd[1802]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 09:39:02.490466 systemd[1802]: Reached target sockets.target - Sockets.
Dec 16 09:39:02.490485 systemd[1802]: Reached target basic.target - Basic System.
Dec 16 09:39:02.490531 systemd[1802]: Reached target default.target - Main User Target.
Dec 16 09:39:02.490565 systemd[1802]: Startup finished in 129ms.
Dec 16 09:39:02.490697 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 09:39:02.504712 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 09:39:03.195963 systemd[1]: Started sshd@1-138.199.148.223:22-147.75.109.163:57822.service - OpenSSH per-connection server daemon (147.75.109.163:57822).
Dec 16 09:39:04.185697 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 57822 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:04.187250 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:04.191537 systemd-logind[1475]: New session 2 of user core.
Dec 16 09:39:04.197538 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 09:39:04.868203 sshd[1813]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:04.871605 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit.
Dec 16 09:39:04.872715 systemd[1]: sshd@1-138.199.148.223:22-147.75.109.163:57822.service: Deactivated successfully.
Dec 16 09:39:04.874686 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 09:39:04.875648 systemd-logind[1475]: Removed session 2.
Dec 16 09:39:05.035336 systemd[1]: Started sshd@2-138.199.148.223:22-147.75.109.163:57838.service - OpenSSH per-connection server daemon (147.75.109.163:57838).
Dec 16 09:39:06.012174 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 57838 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:06.014071 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:06.019673 systemd-logind[1475]: New session 3 of user core.
Dec 16 09:39:06.028693 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 09:39:06.687112 sshd[1820]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:06.691663 systemd[1]: sshd@2-138.199.148.223:22-147.75.109.163:57838.service: Deactivated successfully.
Dec 16 09:39:06.693866 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 09:39:06.694544 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit.
Dec 16 09:39:06.695894 systemd-logind[1475]: Removed session 3.
Dec 16 09:39:06.859799 systemd[1]: Started sshd@3-138.199.148.223:22-147.75.109.163:45530.service - OpenSSH per-connection server daemon (147.75.109.163:45530).
Dec 16 09:39:07.837362 sshd[1827]: Accepted publickey for core from 147.75.109.163 port 45530 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:07.839053 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:07.843950 systemd-logind[1475]: New session 4 of user core.
Dec 16 09:39:07.854671 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 09:39:08.517591 sshd[1827]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:08.524172 systemd[1]: sshd@3-138.199.148.223:22-147.75.109.163:45530.service: Deactivated successfully.
Dec 16 09:39:08.526117 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 09:39:08.527884 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit.
Dec 16 09:39:08.529569 systemd-logind[1475]: Removed session 4.
Dec 16 09:39:08.691038 systemd[1]: Started sshd@4-138.199.148.223:22-147.75.109.163:45542.service - OpenSSH per-connection server daemon (147.75.109.163:45542).
Dec 16 09:39:09.663369 sshd[1834]: Accepted publickey for core from 147.75.109.163 port 45542 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:09.665319 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:09.671037 systemd-logind[1475]: New session 5 of user core.
Dec 16 09:39:09.678663 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 09:39:10.193319 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 09:39:10.193664 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 09:39:10.212054 sudo[1837]: pam_unix(sudo:session): session closed for user root
Dec 16 09:39:10.371195 sshd[1834]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:10.374078 systemd[1]: sshd@4-138.199.148.223:22-147.75.109.163:45542.service: Deactivated successfully.
Dec 16 09:39:10.376030 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 09:39:10.377357 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit.
Dec 16 09:39:10.378855 systemd-logind[1475]: Removed session 5.
Dec 16 09:39:10.540920 systemd[1]: Started sshd@5-138.199.148.223:22-147.75.109.163:45544.service - OpenSSH per-connection server daemon (147.75.109.163:45544).
Dec 16 09:39:11.526415 sshd[1842]: Accepted publickey for core from 147.75.109.163 port 45544 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:11.528782 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:11.534840 systemd-logind[1475]: New session 6 of user core.
Dec 16 09:39:11.541660 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 09:39:11.846213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 16 09:39:11.852681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:11.982852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:11.993767 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:39:12.028942 kubelet[1853]: E1216 09:39:12.028883 1853 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:39:12.032566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:39:12.032753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:39:12.052485 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 09:39:12.052793 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 09:39:12.056287 sudo[1863]: pam_unix(sudo:session): session closed for user root
Dec 16 09:39:12.061807 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 16 09:39:12.062100 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 09:39:12.074710 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 16 09:39:12.077293 auditctl[1866]: No rules
Dec 16 09:39:12.077672 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 09:39:12.077914 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 16 09:39:12.083780 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 16 09:39:12.111372 augenrules[1884]: No rules
Dec 16 09:39:12.112578 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 16 09:39:12.114042 sudo[1862]: pam_unix(sudo:session): session closed for user root
Dec 16 09:39:12.275320 sshd[1842]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:12.278999 systemd[1]: sshd@5-138.199.148.223:22-147.75.109.163:45544.service: Deactivated successfully.
Dec 16 09:39:12.281736 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 09:39:12.283538 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit.
Dec 16 09:39:12.284917 systemd-logind[1475]: Removed session 6.
Dec 16 09:39:12.441708 systemd[1]: Started sshd@6-138.199.148.223:22-147.75.109.163:45546.service - OpenSSH per-connection server daemon (147.75.109.163:45546).
Dec 16 09:39:13.416857 sshd[1892]: Accepted publickey for core from 147.75.109.163 port 45546 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:39:13.418504 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:39:13.423393 systemd-logind[1475]: New session 7 of user core.
Dec 16 09:39:13.429605 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 09:39:13.937953 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 09:39:13.938257 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 09:39:14.193756 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 09:39:14.193818 (dockerd)[1911]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 09:39:14.441936 dockerd[1911]: time="2024-12-16T09:39:14.441851686Z" level=info msg="Starting up"
Dec 16 09:39:14.510540 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1053720135-merged.mount: Deactivated successfully.
Dec 16 09:39:14.547414 dockerd[1911]: time="2024-12-16T09:39:14.547374481Z" level=info msg="Loading containers: start."
Dec 16 09:39:14.657585 kernel: Initializing XFRM netlink socket
Dec 16 09:39:14.736750 systemd-networkd[1390]: docker0: Link UP
Dec 16 09:39:14.755852 dockerd[1911]: time="2024-12-16T09:39:14.755803238Z" level=info msg="Loading containers: done."
Dec 16 09:39:14.770071 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck379026813-merged.mount: Deactivated successfully.
Dec 16 09:39:14.773596 dockerd[1911]: time="2024-12-16T09:39:14.773557060Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 09:39:14.773665 dockerd[1911]: time="2024-12-16T09:39:14.773643611Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 16 09:39:14.773771 dockerd[1911]: time="2024-12-16T09:39:14.773748144Z" level=info msg="Daemon has completed initialization"
Dec 16 09:39:14.803866 dockerd[1911]: time="2024-12-16T09:39:14.803745403Z" level=info msg="API listen on /run/docker.sock"
Dec 16 09:39:14.804142 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 09:39:15.870446 containerd[1491]: time="2024-12-16T09:39:15.870214689Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 16 09:39:16.428162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647450635.mount: Deactivated successfully.
Dec 16 09:39:17.789865 containerd[1491]: time="2024-12-16T09:39:17.789796723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:17.791048 containerd[1491]: time="2024-12-16T09:39:17.791009265Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675734"
Dec 16 09:39:17.791713 containerd[1491]: time="2024-12-16T09:39:17.791660424Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:17.795073 containerd[1491]: time="2024-12-16T09:39:17.794671977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:17.796304 containerd[1491]: time="2024-12-16T09:39:17.796106111Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.925848461s"
Dec 16 09:39:17.796304 containerd[1491]: time="2024-12-16T09:39:17.796147057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 16 09:39:17.818757 containerd[1491]: time="2024-12-16T09:39:17.818713585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 16 09:39:19.959096 containerd[1491]: time="2024-12-16T09:39:19.959007510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:19.960335 containerd[1491]: time="2024-12-16T09:39:19.960282809Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606429"
Dec 16 09:39:19.961122 containerd[1491]: time="2024-12-16T09:39:19.961080200Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:19.964095 containerd[1491]: time="2024-12-16T09:39:19.964043087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:19.966056 containerd[1491]: time="2024-12-16T09:39:19.965209112Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.146292432s"
Dec 16 09:39:19.966056 containerd[1491]: time="2024-12-16T09:39:19.965249027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 16 09:39:19.992455 containerd[1491]: time="2024-12-16T09:39:19.992383946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 16 09:39:20.960656 containerd[1491]: time="2024-12-16T09:39:20.960591212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:20.961645 containerd[1491]: time="2024-12-16T09:39:20.961608172Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783055"
Dec 16 09:39:20.962826 containerd[1491]: time="2024-12-16T09:39:20.962794206Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:20.965282 containerd[1491]: time="2024-12-16T09:39:20.965038398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:20.965831 containerd[1491]: time="2024-12-16T09:39:20.965804071Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 973.383707ms"
Dec 16 09:39:20.965874 containerd[1491]: time="2024-12-16T09:39:20.965831140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 16 09:39:20.989876 containerd[1491]: time="2024-12-16T09:39:20.989839061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 16 09:39:21.953196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414832125.mount: Deactivated successfully.
Dec 16 09:39:22.096383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Dec 16 09:39:22.102721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:22.307604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:22.311310 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 09:39:22.353080 kubelet[2148]: E1216 09:39:22.352669 2148 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 09:39:22.356403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 09:39:22.356625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 09:39:22.382955 containerd[1491]: time="2024-12-16T09:39:22.382868719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:22.383832 containerd[1491]: time="2024-12-16T09:39:22.383712898Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057496"
Dec 16 09:39:22.384572 containerd[1491]: time="2024-12-16T09:39:22.384494763Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:22.386651 containerd[1491]: time="2024-12-16T09:39:22.386607932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:22.387547 containerd[1491]: time="2024-12-16T09:39:22.387357396Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.397295991s"
Dec 16 09:39:22.387547 containerd[1491]: time="2024-12-16T09:39:22.387397771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 16 09:39:22.408802 containerd[1491]: time="2024-12-16T09:39:22.408735327Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 16 09:39:22.939775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019295477.mount: Deactivated successfully.
Dec 16 09:39:23.584048 containerd[1491]: time="2024-12-16T09:39:23.582783480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:23.584048 containerd[1491]: time="2024-12-16T09:39:23.584001265Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Dec 16 09:39:23.584631 containerd[1491]: time="2024-12-16T09:39:23.584608946Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:23.587503 containerd[1491]: time="2024-12-16T09:39:23.587478553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:23.588577 containerd[1491]: time="2024-12-16T09:39:23.588528205Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.179752974s"
Dec 16 09:39:23.588577 containerd[1491]: time="2024-12-16T09:39:23.588573760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 16 09:39:23.608773 containerd[1491]: time="2024-12-16T09:39:23.608724008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 16 09:39:24.091492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098942033.mount: Deactivated successfully.
Dec 16 09:39:24.098335 containerd[1491]: time="2024-12-16T09:39:24.098251552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:24.099607 containerd[1491]: time="2024-12-16T09:39:24.099540310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Dec 16 09:39:24.100230 containerd[1491]: time="2024-12-16T09:39:24.100192002Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:24.102788 containerd[1491]: time="2024-12-16T09:39:24.102736346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:24.104568 containerd[1491]: time="2024-12-16T09:39:24.103658892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 494.896013ms"
Dec 16 09:39:24.104568 containerd[1491]: time="2024-12-16T09:39:24.103691122Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 16 09:39:24.133218 containerd[1491]: time="2024-12-16T09:39:24.133178669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 16 09:39:24.622104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902775263.mount: Deactivated successfully.
Dec 16 09:39:28.040157 containerd[1491]: time="2024-12-16T09:39:28.040029658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:28.041672 containerd[1491]: time="2024-12-16T09:39:28.041620941Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651"
Dec 16 09:39:28.042508 containerd[1491]: time="2024-12-16T09:39:28.042458631Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:28.045354 containerd[1491]: time="2024-12-16T09:39:28.045308357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:28.046459 containerd[1491]: time="2024-12-16T09:39:28.046372738Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.913155868s"
Dec 16 09:39:28.046459 containerd[1491]: time="2024-12-16T09:39:28.046405740Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 16 09:39:30.416482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:30.423743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:30.448631 systemd[1]: Reloading requested from client PID 2334 ('systemctl') (unit session-7.scope)...
Dec 16 09:39:30.448815 systemd[1]: Reloading...
Dec 16 09:39:30.608458 zram_generator::config[2375]: No configuration found.
Dec 16 09:39:30.717143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 16 09:39:30.784127 systemd[1]: Reloading finished in 334 ms.
Dec 16 09:39:30.835955 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 09:39:30.836079 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 09:39:30.836745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:30.840808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:30.965798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:30.974772 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 09:39:31.011325 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 09:39:31.011325 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 16 09:39:31.011325 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 09:39:31.011722 kubelet[2428]: I1216 09:39:31.011368 2428 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 09:39:31.222885 kubelet[2428]: I1216 09:39:31.222841 2428 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 16 09:39:31.222885 kubelet[2428]: I1216 09:39:31.222867 2428 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 09:39:31.223108 kubelet[2428]: I1216 09:39:31.223076 2428 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 16 09:39:31.245871 kubelet[2428]: I1216 09:39:31.245701 2428 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 09:39:31.247351 kubelet[2428]: E1216 09:39:31.247292 2428 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.148.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.148.223:6443: connect: connection refused
Dec 16 09:39:31.260830 kubelet[2428]: I1216 09:39:31.260785 2428 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 09:39:31.262971 kubelet[2428]: I1216 09:39:31.262920 2428 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 09:39:31.264194 kubelet[2428]: I1216 09:39:31.262956 2428 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-2c3a583fea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 16 09:39:31.264701 kubelet[2428]: I1216 09:39:31.264675 2428 topology_manager.go:138] "Creating topology manager with none policy" Dec
16 09:39:31.264701 kubelet[2428]: I1216 09:39:31.264698 2428 container_manager_linux.go:301] "Creating device plugin manager" Dec 16 09:39:31.267104 kubelet[2428]: I1216 09:39:31.267083 2428 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:39:31.268069 kubelet[2428]: I1216 09:39:31.267852 2428 kubelet.go:400] "Attempting to sync node with API server" Dec 16 09:39:31.268069 kubelet[2428]: I1216 09:39:31.267871 2428 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 09:39:31.268069 kubelet[2428]: I1216 09:39:31.267893 2428 kubelet.go:312] "Adding apiserver pod source" Dec 16 09:39:31.268069 kubelet[2428]: I1216 09:39:31.267914 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 09:39:31.268268 kubelet[2428]: W1216 09:39:31.268215 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.148.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-2c3a583fea&limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.268315 kubelet[2428]: E1216 09:39:31.268270 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.148.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-2c3a583fea&limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.270419 kubelet[2428]: W1216 09:39:31.270304 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.148.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.270419 kubelet[2428]: E1216 09:39:31.270337 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://138.199.148.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.270801 kubelet[2428]: I1216 09:39:31.270774 2428 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 16 09:39:31.272519 kubelet[2428]: I1216 09:39:31.272330 2428 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 09:39:31.272519 kubelet[2428]: W1216 09:39:31.272397 2428 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 09:39:31.273233 kubelet[2428]: I1216 09:39:31.273202 2428 server.go:1264] "Started kubelet" Dec 16 09:39:31.278462 kubelet[2428]: I1216 09:39:31.277965 2428 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 09:39:31.281069 kubelet[2428]: I1216 09:39:31.280240 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 09:39:31.281069 kubelet[2428]: I1216 09:39:31.280523 2428 server.go:455] "Adding debug handlers to kubelet server" Dec 16 09:39:31.281069 kubelet[2428]: I1216 09:39:31.280649 2428 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 09:39:31.281069 kubelet[2428]: E1216 09:39:31.280813 2428 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.148.223:6443/api/v1/namespaces/default/events\": dial tcp 138.199.148.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-b-2c3a583fea.18119ed57cc9f5c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-2c3a583fea,UID:ci-4081-2-1-b-2c3a583fea,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-2c3a583fea,},FirstTimestamp:2024-12-16 09:39:31.273184705 +0000 UTC m=+0.294920784,LastTimestamp:2024-12-16 09:39:31.273184705 +0000 UTC m=+0.294920784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-2c3a583fea,}" Dec 16 09:39:31.282575 kubelet[2428]: I1216 09:39:31.282031 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 09:39:31.289700 kubelet[2428]: E1216 09:39:31.289218 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found" Dec 16 09:39:31.289700 kubelet[2428]: I1216 09:39:31.289258 2428 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 16 09:39:31.290398 kubelet[2428]: I1216 09:39:31.290383 2428 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 16 09:39:31.290538 kubelet[2428]: I1216 09:39:31.290523 2428 reconciler.go:26] "Reconciler: start to sync state" Dec 16 09:39:31.290933 kubelet[2428]: W1216 09:39:31.290901 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.148.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.291042 kubelet[2428]: E1216 09:39:31.291029 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.148.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.291992 kubelet[2428]: E1216 09:39:31.291970 2428 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 09:39:31.292143 kubelet[2428]: E1216 09:39:31.292116 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-2c3a583fea?timeout=10s\": dial tcp 138.199.148.223:6443: connect: connection refused" interval="200ms" Dec 16 09:39:31.292387 kubelet[2428]: I1216 09:39:31.292369 2428 factory.go:221] Registration of the systemd container factory successfully Dec 16 09:39:31.292601 kubelet[2428]: I1216 09:39:31.292522 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 09:39:31.294270 kubelet[2428]: I1216 09:39:31.293858 2428 factory.go:221] Registration of the containerd container factory successfully Dec 16 09:39:31.308756 kubelet[2428]: I1216 09:39:31.308611 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 09:39:31.316468 kubelet[2428]: I1216 09:39:31.316326 2428 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 09:39:31.316468 kubelet[2428]: I1216 09:39:31.316356 2428 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 16 09:39:31.316468 kubelet[2428]: I1216 09:39:31.316371 2428 kubelet.go:2337] "Starting kubelet main sync loop" Dec 16 09:39:31.316468 kubelet[2428]: E1216 09:39:31.316406 2428 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 09:39:31.320596 kubelet[2428]: W1216 09:39:31.320548 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.148.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.320596 kubelet[2428]: E1216 09:39:31.320595 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.148.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:31.329068 kubelet[2428]: I1216 09:39:31.328988 2428 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 16 09:39:31.329068 kubelet[2428]: I1216 09:39:31.329034 2428 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 16 09:39:31.329068 kubelet[2428]: I1216 09:39:31.329050 2428 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:39:31.331203 kubelet[2428]: I1216 09:39:31.331176 2428 policy_none.go:49] "None policy: Start" Dec 16 09:39:31.331772 kubelet[2428]: I1216 09:39:31.331729 2428 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 16 09:39:31.331772 kubelet[2428]: I1216 09:39:31.331753 2428 state_mem.go:35] "Initializing new in-memory state store" Dec 16 09:39:31.338324 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 09:39:31.350644 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 09:39:31.354352 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 09:39:31.365493 kubelet[2428]: I1216 09:39:31.365289 2428 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 09:39:31.365856 kubelet[2428]: I1216 09:39:31.365498 2428 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 09:39:31.365856 kubelet[2428]: I1216 09:39:31.365624 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 09:39:31.367709 kubelet[2428]: E1216 09:39:31.367692 2428 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-b-2c3a583fea\" not found" Dec 16 09:39:31.392287 kubelet[2428]: I1216 09:39:31.392257 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.392642 kubelet[2428]: E1216 09:39:31.392596 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.223:6443/api/v1/nodes\": dial tcp 138.199.148.223:6443: connect: connection refused" node="ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.416999 kubelet[2428]: I1216 09:39:31.416942 2428 topology_manager.go:215] "Topology Admit Handler" podUID="007695315bc95d4e50154860454250e8" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.418835 kubelet[2428]: I1216 09:39:31.418807 2428 topology_manager.go:215] "Topology Admit Handler" podUID="da199e9d38e55dead070c209624f631b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.420263 kubelet[2428]: I1216 09:39:31.420183 2428 topology_manager.go:215] "Topology Admit Handler" 
podUID="6f06a0bae4f5a6fa7aebbf278d519fa4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.425993 systemd[1]: Created slice kubepods-burstable-pod007695315bc95d4e50154860454250e8.slice - libcontainer container kubepods-burstable-pod007695315bc95d4e50154860454250e8.slice. Dec 16 09:39:31.446246 systemd[1]: Created slice kubepods-burstable-podda199e9d38e55dead070c209624f631b.slice - libcontainer container kubepods-burstable-podda199e9d38e55dead070c209624f631b.slice. Dec 16 09:39:31.462069 systemd[1]: Created slice kubepods-burstable-pod6f06a0bae4f5a6fa7aebbf278d519fa4.slice - libcontainer container kubepods-burstable-pod6f06a0bae4f5a6fa7aebbf278d519fa4.slice. Dec 16 09:39:31.492088 kubelet[2428]: I1216 09:39:31.492020 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da199e9d38e55dead070c209624f631b-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-2c3a583fea\" (UID: \"da199e9d38e55dead070c209624f631b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492088 kubelet[2428]: I1216 09:39:31.492074 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492365 kubelet[2428]: I1216 09:39:31.492105 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea" 
Dec 16 09:39:31.492365 kubelet[2428]: I1216 09:39:31.492137 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492365 kubelet[2428]: I1216 09:39:31.492176 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da199e9d38e55dead070c209624f631b-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-2c3a583fea\" (UID: \"da199e9d38e55dead070c209624f631b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492365 kubelet[2428]: I1216 09:39:31.492213 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da199e9d38e55dead070c209624f631b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-2c3a583fea\" (UID: \"da199e9d38e55dead070c209624f631b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492365 kubelet[2428]: I1216 09:39:31.492242 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492743 kubelet[2428]: I1216 09:39:31.492269 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492743 kubelet[2428]: I1216 09:39:31.492295 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/007695315bc95d4e50154860454250e8-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-2c3a583fea\" (UID: \"007695315bc95d4e50154860454250e8\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.492743 kubelet[2428]: E1216 09:39:31.492614 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-2c3a583fea?timeout=10s\": dial tcp 138.199.148.223:6443: connect: connection refused" interval="400ms" Dec 16 09:39:31.595261 kubelet[2428]: I1216 09:39:31.595122 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.595496 kubelet[2428]: E1216 09:39:31.595460 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.223:6443/api/v1/nodes\": dial tcp 138.199.148.223:6443: connect: connection refused" node="ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.745841 containerd[1491]: time="2024-12-16T09:39:31.745786347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-2c3a583fea,Uid:007695315bc95d4e50154860454250e8,Namespace:kube-system,Attempt:0,}" Dec 16 09:39:31.765817 containerd[1491]: time="2024-12-16T09:39:31.765609066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-2c3a583fea,Uid:6f06a0bae4f5a6fa7aebbf278d519fa4,Namespace:kube-system,Attempt:0,}" Dec 16 09:39:31.765817 
containerd[1491]: time="2024-12-16T09:39:31.765632298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-2c3a583fea,Uid:da199e9d38e55dead070c209624f631b,Namespace:kube-system,Attempt:0,}" Dec 16 09:39:31.893539 kubelet[2428]: E1216 09:39:31.893473 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-2c3a583fea?timeout=10s\": dial tcp 138.199.148.223:6443: connect: connection refused" interval="800ms" Dec 16 09:39:31.998389 kubelet[2428]: I1216 09:39:31.998360 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:31.999212 kubelet[2428]: E1216 09:39:31.999155 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.223:6443/api/v1/nodes\": dial tcp 138.199.148.223:6443: connect: connection refused" node="ci-4081-2-1-b-2c3a583fea" Dec 16 09:39:32.183704 kubelet[2428]: W1216 09:39:32.183551 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.148.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:32.183704 kubelet[2428]: E1216 09:39:32.183604 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.148.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:32.227481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287638968.mount: Deactivated successfully. 
Dec 16 09:39:32.236223 containerd[1491]: time="2024-12-16T09:39:32.236180081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:32.237042 containerd[1491]: time="2024-12-16T09:39:32.237015708Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:32.237991 containerd[1491]: time="2024-12-16T09:39:32.237958695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 16 09:39:32.240820 containerd[1491]: time="2024-12-16T09:39:32.240774382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 16 09:39:32.241458 containerd[1491]: time="2024-12-16T09:39:32.241397222Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:32.243374 containerd[1491]: time="2024-12-16T09:39:32.243194882Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:32.243374 containerd[1491]: time="2024-12-16T09:39:32.243240446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 16 09:39:32.246923 containerd[1491]: time="2024-12-16T09:39:32.246889326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:39:32.248980 
containerd[1491]: time="2024-12-16T09:39:32.248948452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.069772ms" Dec 16 09:39:32.250822 containerd[1491]: time="2024-12-16T09:39:32.250720104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.999802ms" Dec 16 09:39:32.251909 containerd[1491]: time="2024-12-16T09:39:32.251868622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.19077ms" Dec 16 09:39:32.380752 containerd[1491]: time="2024-12-16T09:39:32.380103464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:32.380752 containerd[1491]: time="2024-12-16T09:39:32.380170609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:32.380752 containerd[1491]: time="2024-12-16T09:39:32.380187881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:32.380752 containerd[1491]: time="2024-12-16T09:39:32.380278410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:32.381394 containerd[1491]: time="2024-12-16T09:39:32.381161455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:32.381394 containerd[1491]: time="2024-12-16T09:39:32.381236906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:32.381394 containerd[1491]: time="2024-12-16T09:39:32.381246705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:32.382062 containerd[1491]: time="2024-12-16T09:39:32.381582801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:32.393396 containerd[1491]: time="2024-12-16T09:39:32.393184227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:39:32.393396 containerd[1491]: time="2024-12-16T09:39:32.393229761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:39:32.393396 containerd[1491]: time="2024-12-16T09:39:32.393240291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:32.393396 containerd[1491]: time="2024-12-16T09:39:32.393309229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:39:32.410676 kubelet[2428]: W1216 09:39:32.410381 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.148.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:32.410676 kubelet[2428]: E1216 09:39:32.410451 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.148.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused Dec 16 09:39:32.416635 systemd[1]: Started cri-containerd-048ee07b282a9aed0b100997fdb4b1edc81690bd5fedf9a7c23a7c80d47eec8d.scope - libcontainer container 048ee07b282a9aed0b100997fdb4b1edc81690bd5fedf9a7c23a7c80d47eec8d. Dec 16 09:39:32.423653 systemd[1]: Started cri-containerd-b73da414ccb81e6a652e6c95367e5fb1672340692c7b22b4e7ee18eea2a86835.scope - libcontainer container b73da414ccb81e6a652e6c95367e5fb1672340692c7b22b4e7ee18eea2a86835. Dec 16 09:39:32.429786 systemd[1]: Started cri-containerd-bca855697465f407eb060a7a5ff21d68322e635ac24345446e6546f68f58f6dc.scope - libcontainer container bca855697465f407eb060a7a5ff21d68322e635ac24345446e6546f68f58f6dc. 
Dec 16 09:39:32.482275 containerd[1491]: time="2024-12-16T09:39:32.482014167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-2c3a583fea,Uid:da199e9d38e55dead070c209624f631b,Namespace:kube-system,Attempt:0,} returns sandbox id \"048ee07b282a9aed0b100997fdb4b1edc81690bd5fedf9a7c23a7c80d47eec8d\"" Dec 16 09:39:32.490620 containerd[1491]: time="2024-12-16T09:39:32.490563136Z" level=info msg="CreateContainer within sandbox \"048ee07b282a9aed0b100997fdb4b1edc81690bd5fedf9a7c23a7c80d47eec8d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 09:39:32.508539 containerd[1491]: time="2024-12-16T09:39:32.508145456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-2c3a583fea,Uid:007695315bc95d4e50154860454250e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b73da414ccb81e6a652e6c95367e5fb1672340692c7b22b4e7ee18eea2a86835\"" Dec 16 09:39:32.510098 containerd[1491]: time="2024-12-16T09:39:32.510065614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-2c3a583fea,Uid:6f06a0bae4f5a6fa7aebbf278d519fa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bca855697465f407eb060a7a5ff21d68322e635ac24345446e6546f68f58f6dc\"" Dec 16 09:39:32.512726 containerd[1491]: time="2024-12-16T09:39:32.512356082Z" level=info msg="CreateContainer within sandbox \"b73da414ccb81e6a652e6c95367e5fb1672340692c7b22b4e7ee18eea2a86835\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 09:39:32.515227 containerd[1491]: time="2024-12-16T09:39:32.515204669Z" level=info msg="CreateContainer within sandbox \"bca855697465f407eb060a7a5ff21d68322e635ac24345446e6546f68f58f6dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 09:39:32.518637 containerd[1491]: time="2024-12-16T09:39:32.518596088Z" level=info msg="CreateContainer within sandbox 
\"048ee07b282a9aed0b100997fdb4b1edc81690bd5fedf9a7c23a7c80d47eec8d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa98bde4bd1543c2dc957dd9e1e5fe3fe2fd3df4acbeb8f1b725b26ba6200cd4\"" Dec 16 09:39:32.519350 containerd[1491]: time="2024-12-16T09:39:32.519309277Z" level=info msg="StartContainer for \"aa98bde4bd1543c2dc957dd9e1e5fe3fe2fd3df4acbeb8f1b725b26ba6200cd4\"" Dec 16 09:39:32.530312 containerd[1491]: time="2024-12-16T09:39:32.530266202Z" level=info msg="CreateContainer within sandbox \"b73da414ccb81e6a652e6c95367e5fb1672340692c7b22b4e7ee18eea2a86835\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f\"" Dec 16 09:39:32.533832 containerd[1491]: time="2024-12-16T09:39:32.532773564Z" level=info msg="StartContainer for \"9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f\"" Dec 16 09:39:32.540559 containerd[1491]: time="2024-12-16T09:39:32.540516079Z" level=info msg="CreateContainer within sandbox \"bca855697465f407eb060a7a5ff21d68322e635ac24345446e6546f68f58f6dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54\"" Dec 16 09:39:32.541141 containerd[1491]: time="2024-12-16T09:39:32.541105187Z" level=info msg="StartContainer for \"928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54\"" Dec 16 09:39:32.554553 systemd[1]: Started cri-containerd-aa98bde4bd1543c2dc957dd9e1e5fe3fe2fd3df4acbeb8f1b725b26ba6200cd4.scope - libcontainer container aa98bde4bd1543c2dc957dd9e1e5fe3fe2fd3df4acbeb8f1b725b26ba6200cd4. Dec 16 09:39:32.584799 systemd[1]: Started cri-containerd-928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54.scope - libcontainer container 928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54. 
Dec 16 09:39:32.587446 systemd[1]: Started cri-containerd-9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f.scope - libcontainer container 9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f.
Dec 16 09:39:32.624595 containerd[1491]: time="2024-12-16T09:39:32.624549875Z" level=info msg="StartContainer for \"aa98bde4bd1543c2dc957dd9e1e5fe3fe2fd3df4acbeb8f1b725b26ba6200cd4\" returns successfully"
Dec 16 09:39:32.643299 containerd[1491]: time="2024-12-16T09:39:32.643193241Z" level=info msg="StartContainer for \"9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f\" returns successfully"
Dec 16 09:39:32.661974 containerd[1491]: time="2024-12-16T09:39:32.661921276Z" level=info msg="StartContainer for \"928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54\" returns successfully"
Dec 16 09:39:32.672725 kubelet[2428]: W1216 09:39:32.672661 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.148.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-2c3a583fea&limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused
Dec 16 09:39:32.672725 kubelet[2428]: E1216 09:39:32.672736 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.148.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-2c3a583fea&limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused
Dec 16 09:39:32.694886 kubelet[2428]: E1216 09:39:32.694323 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-2c3a583fea?timeout=10s\": dial tcp 138.199.148.223:6443: connect: connection refused" interval="1.6s"
Dec 16 09:39:32.702551 kubelet[2428]: W1216 09:39:32.702341 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.148.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused
Dec 16 09:39:32.702823 kubelet[2428]: E1216 09:39:32.702559 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.148.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.223:6443: connect: connection refused
Dec 16 09:39:32.802834 kubelet[2428]: I1216 09:39:32.802688 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:32.803205 kubelet[2428]: E1216 09:39:32.803051 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.223:6443/api/v1/nodes\": dial tcp 138.199.148.223:6443: connect: connection refused" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:34.300596 kubelet[2428]: E1216 09:39:34.300544 2428 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-b-2c3a583fea\" not found" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:34.405171 kubelet[2428]: I1216 09:39:34.405095 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:34.421437 kubelet[2428]: I1216 09:39:34.421399 2428 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:34.428641 kubelet[2428]: E1216 09:39:34.428604 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:34.529110 kubelet[2428]: E1216 09:39:34.529065 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:34.629291 kubelet[2428]: E1216 09:39:34.629218 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:34.730107 kubelet[2428]: E1216 09:39:34.730060 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:34.831197 kubelet[2428]: E1216 09:39:34.831141 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:34.932024 kubelet[2428]: E1216 09:39:34.931846 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:35.032635 kubelet[2428]: E1216 09:39:35.032576 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:35.133508 kubelet[2428]: E1216 09:39:35.133417 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:35.234623 kubelet[2428]: E1216 09:39:35.234294 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:35.334498 kubelet[2428]: E1216 09:39:35.334414 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-2c3a583fea\" not found"
Dec 16 09:39:36.274094 kubelet[2428]: I1216 09:39:36.274051 2428 apiserver.go:52] "Watching apiserver"
Dec 16 09:39:36.290629 kubelet[2428]: I1216 09:39:36.290586 2428 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 16 09:39:36.414782 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-7.scope)...
Dec 16 09:39:36.414799 systemd[1]: Reloading...
Dec 16 09:39:36.529522 zram_generator::config[2744]: No configuration found.
Dec 16 09:39:36.642282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 16 09:39:36.722714 systemd[1]: Reloading finished in 307 ms.
Dec 16 09:39:36.770139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:36.778758 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 09:39:36.779075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:36.784713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 09:39:36.915038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 09:39:36.927064 (kubelet)[2794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 09:39:36.994137 kubelet[2794]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 09:39:36.994137 kubelet[2794]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 16 09:39:36.994137 kubelet[2794]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 09:39:36.994638 kubelet[2794]: I1216 09:39:36.994156 2794 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 09:39:37.001798 kubelet[2794]: I1216 09:39:37.001753 2794 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 16 09:39:37.001798 kubelet[2794]: I1216 09:39:37.001779 2794 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 09:39:37.001990 kubelet[2794]: I1216 09:39:37.001968 2794 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 16 09:39:37.003275 kubelet[2794]: I1216 09:39:37.003254 2794 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 16 09:39:37.010518 kubelet[2794]: I1216 09:39:37.009832 2794 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 09:39:37.016015 kubelet[2794]: I1216 09:39:37.015985 2794 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 09:39:37.016217 kubelet[2794]: I1216 09:39:37.016168 2794 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 09:39:37.016370 kubelet[2794]: I1216 09:39:37.016206 2794 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-2c3a583fea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 16 09:39:37.016370 kubelet[2794]: I1216 09:39:37.016371 2794 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 09:39:37.016557 kubelet[2794]: I1216 09:39:37.016380 2794 container_manager_linux.go:301] "Creating device plugin manager"
Dec 16 09:39:37.016557 kubelet[2794]: I1216 09:39:37.016450 2794 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 09:39:37.016622 kubelet[2794]: I1216 09:39:37.016584 2794 kubelet.go:400] "Attempting to sync node with API server"
Dec 16 09:39:37.016622 kubelet[2794]: I1216 09:39:37.016596 2794 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 09:39:37.018310 kubelet[2794]: I1216 09:39:37.018243 2794 kubelet.go:312] "Adding apiserver pod source"
Dec 16 09:39:37.018310 kubelet[2794]: I1216 09:39:37.018268 2794 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 09:39:37.023388 kubelet[2794]: I1216 09:39:37.023351 2794 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 16 09:39:37.026314 kubelet[2794]: I1216 09:39:37.025634 2794 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 09:39:37.026314 kubelet[2794]: I1216 09:39:37.026120 2794 server.go:1264] "Started kubelet"
Dec 16 09:39:37.036155 kubelet[2794]: I1216 09:39:37.035111 2794 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 09:39:37.044338 kubelet[2794]: I1216 09:39:37.042897 2794 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 09:39:37.044338 kubelet[2794]: I1216 09:39:37.043718 2794 server.go:455] "Adding debug handlers to kubelet server"
Dec 16 09:39:37.045678 kubelet[2794]: I1216 09:39:37.045602 2794 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 09:39:37.046212 kubelet[2794]: I1216 09:39:37.046142 2794 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 09:39:37.049055 kubelet[2794]: E1216 09:39:37.048994 2794 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 09:39:37.050097 kubelet[2794]: I1216 09:39:37.050061 2794 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 16 09:39:37.050183 kubelet[2794]: I1216 09:39:37.050169 2794 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 16 09:39:37.050311 kubelet[2794]: I1216 09:39:37.050280 2794 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 09:39:37.051001 kubelet[2794]: I1216 09:39:37.050936 2794 factory.go:221] Registration of the systemd container factory successfully
Dec 16 09:39:37.051058 kubelet[2794]: I1216 09:39:37.051032 2794 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 09:39:37.060158 kubelet[2794]: I1216 09:39:37.060105 2794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 09:39:37.063637 kubelet[2794]: I1216 09:39:37.062070 2794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 09:39:37.063637 kubelet[2794]: I1216 09:39:37.062102 2794 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 16 09:39:37.063637 kubelet[2794]: I1216 09:39:37.062122 2794 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 16 09:39:37.063637 kubelet[2794]: E1216 09:39:37.062160 2794 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 09:39:37.067701 kubelet[2794]: I1216 09:39:37.067673 2794 factory.go:221] Registration of the containerd container factory successfully
Dec 16 09:39:37.107946 kubelet[2794]: I1216 09:39:37.107921 2794 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 16 09:39:37.108112 kubelet[2794]: I1216 09:39:37.108100 2794 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 16 09:39:37.108201 kubelet[2794]: I1216 09:39:37.108191 2794 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 09:39:37.108488 kubelet[2794]: I1216 09:39:37.108469 2794 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 09:39:37.108582 kubelet[2794]: I1216 09:39:37.108559 2794 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 09:39:37.108645 kubelet[2794]: I1216 09:39:37.108637 2794 policy_none.go:49] "None policy: Start"
Dec 16 09:39:37.109404 kubelet[2794]: I1216 09:39:37.109380 2794 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 16 09:39:37.109532 kubelet[2794]: I1216 09:39:37.109523 2794 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 09:39:37.109799 kubelet[2794]: I1216 09:39:37.109787 2794 state_mem.go:75] "Updated machine memory state"
Dec 16 09:39:37.114191 kubelet[2794]: I1216 09:39:37.114159 2794 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 16 09:39:37.114352 kubelet[2794]: I1216 09:39:37.114318 2794 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 09:39:37.114450 kubelet[2794]: I1216 09:39:37.114413 2794 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 09:39:37.153502 kubelet[2794]: I1216 09:39:37.153418 2794 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.162767 kubelet[2794]: I1216 09:39:37.162608 2794 topology_manager.go:215] "Topology Admit Handler" podUID="da199e9d38e55dead070c209624f631b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.162767 kubelet[2794]: I1216 09:39:37.162759 2794 topology_manager.go:215] "Topology Admit Handler" podUID="6f06a0bae4f5a6fa7aebbf278d519fa4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.163011 kubelet[2794]: I1216 09:39:37.162826 2794 topology_manager.go:215] "Topology Admit Handler" podUID="007695315bc95d4e50154860454250e8" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.167656 kubelet[2794]: I1216 09:39:37.166289 2794 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.167656 kubelet[2794]: I1216 09:39:37.166371 2794 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351314 kubelet[2794]: I1216 09:39:37.351262 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da199e9d38e55dead070c209624f631b-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-2c3a583fea\" (UID: \"da199e9d38e55dead070c209624f631b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351314 kubelet[2794]: I1216 09:39:37.351314 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da199e9d38e55dead070c209624f631b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-2c3a583fea\" (UID: \"da199e9d38e55dead070c209624f631b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351314 kubelet[2794]: I1216 09:39:37.351366 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351314 kubelet[2794]: I1216 09:39:37.351446 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351314 kubelet[2794]: I1216 09:39:37.351477 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/007695315bc95d4e50154860454250e8-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-2c3a583fea\" (UID: \"007695315bc95d4e50154860454250e8\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351875 kubelet[2794]: I1216 09:39:37.351498 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da199e9d38e55dead070c209624f631b-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-2c3a583fea\" (UID: \"da199e9d38e55dead070c209624f631b\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351875 kubelet[2794]: I1216 09:39:37.351517 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351875 kubelet[2794]: I1216 09:39:37.351536 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.351875 kubelet[2794]: I1216 09:39:37.351554 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f06a0bae4f5a6fa7aebbf278d519fa4-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-2c3a583fea\" (UID: \"6f06a0bae4f5a6fa7aebbf278d519fa4\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:37.422406 sudo[2827]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 16 09:39:37.422864 sudo[2827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 16 09:39:38.018230 sudo[2827]: pam_unix(sudo:session): session closed for user root
Dec 16 09:39:38.025953 kubelet[2794]: I1216 09:39:38.025697 2794 apiserver.go:52] "Watching apiserver"
Dec 16 09:39:38.051302 kubelet[2794]: I1216 09:39:38.051215 2794 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 16 09:39:38.107339 kubelet[2794]: E1216 09:39:38.107207 2794 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-b-2c3a583fea\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea"
Dec 16 09:39:38.126701 kubelet[2794]: I1216 09:39:38.126620 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-b-2c3a583fea" podStartSLOduration=1.126603795 podStartE2EDuration="1.126603795s" podCreationTimestamp="2024-12-16 09:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:38.126391499 +0000 UTC m=+1.192801591" watchObservedRunningTime="2024-12-16 09:39:38.126603795 +0000 UTC m=+1.193013888"
Dec 16 09:39:38.152938 kubelet[2794]: I1216 09:39:38.150645 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-b-2c3a583fea" podStartSLOduration=1.150626166 podStartE2EDuration="1.150626166s" podCreationTimestamp="2024-12-16 09:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:38.138871264 +0000 UTC m=+1.205305110" watchObservedRunningTime="2024-12-16 09:39:38.150626166 +0000 UTC m=+1.217036259"
Dec 16 09:39:38.162994 kubelet[2794]: I1216 09:39:38.162948 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-b-2c3a583fea" podStartSLOduration=1.162930173 podStartE2EDuration="1.162930173s" podCreationTimestamp="2024-12-16 09:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:38.152828302 +0000 UTC m=+1.219238396" watchObservedRunningTime="2024-12-16 09:39:38.162930173 +0000 UTC m=+1.229340266"
Dec 16 09:39:39.992256 sudo[1895]: pam_unix(sudo:session): session closed for user root
Dec 16 09:39:40.151996 sshd[1892]: pam_unix(sshd:session): session closed for user core
Dec 16 09:39:40.155732 systemd[1]: sshd@6-138.199.148.223:22-147.75.109.163:45546.service: Deactivated successfully.
Dec 16 09:39:40.158118 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 09:39:40.158460 systemd[1]: session-7.scope: Consumed 4.757s CPU time, 188.9M memory peak, 0B memory swap peak.
Dec 16 09:39:40.160337 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit.
Dec 16 09:39:40.161967 systemd-logind[1475]: Removed session 7.
Dec 16 09:39:50.527239 kubelet[2794]: I1216 09:39:50.527101 2794 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 09:39:50.528324 kubelet[2794]: I1216 09:39:50.527815 2794 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 09:39:50.528357 containerd[1491]: time="2024-12-16T09:39:50.527617827Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 09:39:51.413211 kubelet[2794]: I1216 09:39:51.413159 2794 topology_manager.go:215] "Topology Admit Handler" podUID="89bb4259-461b-4f83-bcde-f5eb1f51ebda" podNamespace="kube-system" podName="kube-proxy-hm5sk"
Dec 16 09:39:51.415529 kubelet[2794]: I1216 09:39:51.415495 2794 topology_manager.go:215] "Topology Admit Handler" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" podNamespace="kube-system" podName="cilium-686qt"
Dec 16 09:39:51.428378 systemd[1]: Created slice kubepods-besteffort-pod89bb4259_461b_4f83_bcde_f5eb1f51ebda.slice - libcontainer container kubepods-besteffort-pod89bb4259_461b_4f83_bcde_f5eb1f51ebda.slice.
Dec 16 09:39:51.433211 systemd[1]: Created slice kubepods-burstable-podac98d646_15f1_4d2c_9ee6_19650962f029.slice - libcontainer container kubepods-burstable-podac98d646_15f1_4d2c_9ee6_19650962f029.slice.
Dec 16 09:39:51.535112 kubelet[2794]: I1216 09:39:51.535065 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cni-path\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.535761 kubelet[2794]: I1216 09:39:51.535722 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89bb4259-461b-4f83-bcde-f5eb1f51ebda-xtables-lock\") pod \"kube-proxy-hm5sk\" (UID: \"89bb4259-461b-4f83-bcde-f5eb1f51ebda\") " pod="kube-system/kube-proxy-hm5sk"
Dec 16 09:39:51.535853 kubelet[2794]: I1216 09:39:51.535785 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89bb4259-461b-4f83-bcde-f5eb1f51ebda-lib-modules\") pod \"kube-proxy-hm5sk\" (UID: \"89bb4259-461b-4f83-bcde-f5eb1f51ebda\") " pod="kube-system/kube-proxy-hm5sk"
Dec 16 09:39:51.535853 kubelet[2794]: I1216 09:39:51.535817 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-cgroup\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.535919 kubelet[2794]: I1216 09:39:51.535840 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac98d646-15f1-4d2c-9ee6-19650962f029-clustermesh-secrets\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.535919 kubelet[2794]: I1216 09:39:51.535871 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-run\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.535919 kubelet[2794]: I1216 09:39:51.535886 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-net\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.535919 kubelet[2794]: I1216 09:39:51.535902 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89bb4259-461b-4f83-bcde-f5eb1f51ebda-kube-proxy\") pod \"kube-proxy-hm5sk\" (UID: \"89bb4259-461b-4f83-bcde-f5eb1f51ebda\") " pod="kube-system/kube-proxy-hm5sk"
Dec 16 09:39:51.536090 kubelet[2794]: I1216 09:39:51.535922 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-xtables-lock\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536090 kubelet[2794]: I1216 09:39:51.535936 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-hubble-tls\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536090 kubelet[2794]: I1216 09:39:51.535952 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-bpf-maps\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536090 kubelet[2794]: I1216 09:39:51.535967 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-etc-cni-netd\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536090 kubelet[2794]: I1216 09:39:51.535982 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-hostproc\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536090 kubelet[2794]: I1216 09:39:51.536008 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-lib-modules\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536257 kubelet[2794]: I1216 09:39:51.536023 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-config-path\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536257 kubelet[2794]: I1216 09:39:51.536041 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qr2r\" (UniqueName: \"kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-kube-api-access-5qr2r\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536257 kubelet[2794]: I1216 09:39:51.536057 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-kernel\") pod \"cilium-686qt\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") " pod="kube-system/cilium-686qt"
Dec 16 09:39:51.536257 kubelet[2794]: I1216 09:39:51.536073 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7nm\" (UniqueName: \"kubernetes.io/projected/89bb4259-461b-4f83-bcde-f5eb1f51ebda-kube-api-access-4k7nm\") pod \"kube-proxy-hm5sk\" (UID: \"89bb4259-461b-4f83-bcde-f5eb1f51ebda\") " pod="kube-system/kube-proxy-hm5sk"
Dec 16 09:39:51.618649 kubelet[2794]: I1216 09:39:51.616568 2794 topology_manager.go:215] "Topology Admit Handler" podUID="f4ec3fce-c376-4ad4-90a5-5fe1d3df7028" podNamespace="kube-system" podName="cilium-operator-599987898-kd4fx"
Dec 16 09:39:51.626622 systemd[1]: Created slice kubepods-besteffort-podf4ec3fce_c376_4ad4_90a5_5fe1d3df7028.slice - libcontainer container kubepods-besteffort-podf4ec3fce_c376_4ad4_90a5_5fe1d3df7028.slice.
Dec 16 09:39:51.637914 kubelet[2794]: I1216 09:39:51.636971 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfqg9\" (UniqueName: \"kubernetes.io/projected/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-kube-api-access-sfqg9\") pod \"cilium-operator-599987898-kd4fx\" (UID: \"f4ec3fce-c376-4ad4-90a5-5fe1d3df7028\") " pod="kube-system/cilium-operator-599987898-kd4fx"
Dec 16 09:39:51.637914 kubelet[2794]: I1216 09:39:51.637174 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-cilium-config-path\") pod \"cilium-operator-599987898-kd4fx\" (UID: \"f4ec3fce-c376-4ad4-90a5-5fe1d3df7028\") " pod="kube-system/cilium-operator-599987898-kd4fx"
Dec 16 09:39:51.740734 containerd[1491]: time="2024-12-16T09:39:51.740583743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hm5sk,Uid:89bb4259-461b-4f83-bcde-f5eb1f51ebda,Namespace:kube-system,Attempt:0,}"
Dec 16 09:39:51.742801 containerd[1491]: time="2024-12-16T09:39:51.741534048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-686qt,Uid:ac98d646-15f1-4d2c-9ee6-19650962f029,Namespace:kube-system,Attempt:0,}"
Dec 16 09:39:51.775938 containerd[1491]: time="2024-12-16T09:39:51.774656965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:39:51.775938 containerd[1491]: time="2024-12-16T09:39:51.775150737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:39:51.775938 containerd[1491]: time="2024-12-16T09:39:51.775195882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:51.776314 containerd[1491]: time="2024-12-16T09:39:51.776198724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:51.792700 containerd[1491]: time="2024-12-16T09:39:51.792551007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:39:51.792700 containerd[1491]: time="2024-12-16T09:39:51.792616239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:39:51.792700 containerd[1491]: time="2024-12-16T09:39:51.792650503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:51.795029 containerd[1491]: time="2024-12-16T09:39:51.794344507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:51.802694 systemd[1]: Started cri-containerd-e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38.scope - libcontainer container e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38.
Dec 16 09:39:51.828559 systemd[1]: Started cri-containerd-a7941ef23a3a17ce05de7bc108dcc8f43afb3975d65db03340b8fa61d10f175a.scope - libcontainer container a7941ef23a3a17ce05de7bc108dcc8f43afb3975d65db03340b8fa61d10f175a.
Dec 16 09:39:51.846208 containerd[1491]: time="2024-12-16T09:39:51.845680942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-686qt,Uid:ac98d646-15f1-4d2c-9ee6-19650962f029,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\""
Dec 16 09:39:51.850279 containerd[1491]: time="2024-12-16T09:39:51.850246269Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 16 09:39:51.864100 containerd[1491]: time="2024-12-16T09:39:51.864056795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hm5sk,Uid:89bb4259-461b-4f83-bcde-f5eb1f51ebda,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7941ef23a3a17ce05de7bc108dcc8f43afb3975d65db03340b8fa61d10f175a\""
Dec 16 09:39:51.869583 containerd[1491]: time="2024-12-16T09:39:51.869467461Z" level=info msg="CreateContainer within sandbox \"a7941ef23a3a17ce05de7bc108dcc8f43afb3975d65db03340b8fa61d10f175a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 09:39:51.884589 containerd[1491]: time="2024-12-16T09:39:51.884482606Z" level=info msg="CreateContainer within sandbox \"a7941ef23a3a17ce05de7bc108dcc8f43afb3975d65db03340b8fa61d10f175a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a7c81a8cf0365d2ce576cb2e4198e98d550ed093b989a8ccc15f4821a91a2b25\""
Dec 16 09:39:51.885555 containerd[1491]: time="2024-12-16T09:39:51.885300162Z" level=info msg="StartContainer for \"a7c81a8cf0365d2ce576cb2e4198e98d550ed093b989a8ccc15f4821a91a2b25\""
Dec 16 09:39:51.915578 systemd[1]: Started cri-containerd-a7c81a8cf0365d2ce576cb2e4198e98d550ed093b989a8ccc15f4821a91a2b25.scope - libcontainer container a7c81a8cf0365d2ce576cb2e4198e98d550ed093b989a8ccc15f4821a91a2b25.
Dec 16 09:39:51.932422 containerd[1491]: time="2024-12-16T09:39:51.931009588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-kd4fx,Uid:f4ec3fce-c376-4ad4-90a5-5fe1d3df7028,Namespace:kube-system,Attempt:0,}"
Dec 16 09:39:51.949382 containerd[1491]: time="2024-12-16T09:39:51.949331910Z" level=info msg="StartContainer for \"a7c81a8cf0365d2ce576cb2e4198e98d550ed093b989a8ccc15f4821a91a2b25\" returns successfully"
Dec 16 09:39:51.970037 containerd[1491]: time="2024-12-16T09:39:51.969703560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:39:51.970037 containerd[1491]: time="2024-12-16T09:39:51.969787366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:39:51.970037 containerd[1491]: time="2024-12-16T09:39:51.969807143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:51.970980 containerd[1491]: time="2024-12-16T09:39:51.969952414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:39:51.995606 systemd[1]: Started cri-containerd-1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8.scope - libcontainer container 1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8.
Dec 16 09:39:52.046593 containerd[1491]: time="2024-12-16T09:39:52.046374543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-kd4fx,Uid:f4ec3fce-c376-4ad4-90a5-5fe1d3df7028,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\""
Dec 16 09:39:57.118128 kubelet[2794]: I1216 09:39:57.116563 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hm5sk" podStartSLOduration=6.116544365 podStartE2EDuration="6.116544365s" podCreationTimestamp="2024-12-16 09:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:39:52.134064153 +0000 UTC m=+15.200474286" watchObservedRunningTime="2024-12-16 09:39:57.116544365 +0000 UTC m=+20.182954459"
Dec 16 09:39:58.112464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76815210.mount: Deactivated successfully.
Dec 16 09:39:59.702638 containerd[1491]: time="2024-12-16T09:39:59.702551447Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:59.704565 containerd[1491]: time="2024-12-16T09:39:59.704515407Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735303"
Dec 16 09:39:59.704883 containerd[1491]: time="2024-12-16T09:39:59.704848970Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:39:59.706516 containerd[1491]: time="2024-12-16T09:39:59.706349885Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.856067079s"
Dec 16 09:39:59.706516 containerd[1491]: time="2024-12-16T09:39:59.706382777Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 16 09:39:59.708540 containerd[1491]: time="2024-12-16T09:39:59.708483554Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 16 09:39:59.709659 containerd[1491]: time="2024-12-16T09:39:59.709614397Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 09:39:59.793552 containerd[1491]: time="2024-12-16T09:39:59.793500603Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\""
Dec 16 09:39:59.795871 containerd[1491]: time="2024-12-16T09:39:59.794610787Z" level=info msg="StartContainer for \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\""
Dec 16 09:39:59.895315 systemd[1]: run-containerd-runc-k8s.io-aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591-runc.WKHy3k.mount: Deactivated successfully.
Dec 16 09:39:59.903640 systemd[1]: Started cri-containerd-aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591.scope - libcontainer container aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591.
Dec 16 09:39:59.935093 containerd[1491]: time="2024-12-16T09:39:59.935058287Z" level=info msg="StartContainer for \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\" returns successfully"
Dec 16 09:39:59.945356 systemd[1]: cri-containerd-aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591.scope: Deactivated successfully.
Dec 16 09:40:00.046532 containerd[1491]: time="2024-12-16T09:40:00.039043792Z" level=info msg="shim disconnected" id=aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591 namespace=k8s.io
Dec 16 09:40:00.046532 containerd[1491]: time="2024-12-16T09:40:00.046458842Z" level=warning msg="cleaning up after shim disconnected" id=aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591 namespace=k8s.io
Dec 16 09:40:00.046532 containerd[1491]: time="2024-12-16T09:40:00.046475473Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:40:00.164870 containerd[1491]: time="2024-12-16T09:40:00.164735389Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 09:40:00.177367 containerd[1491]: time="2024-12-16T09:40:00.177237207Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\""
Dec 16 09:40:00.179638 containerd[1491]: time="2024-12-16T09:40:00.177813043Z" level=info msg="StartContainer for \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\""
Dec 16 09:40:00.211589 systemd[1]: Started cri-containerd-54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7.scope - libcontainer container 54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7.
Dec 16 09:40:00.242267 containerd[1491]: time="2024-12-16T09:40:00.242193139Z" level=info msg="StartContainer for \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\" returns successfully"
Dec 16 09:40:00.259763 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 09:40:00.260914 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:40:00.261130 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 16 09:40:00.269817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 09:40:00.270133 systemd[1]: cri-containerd-54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7.scope: Deactivated successfully.
Dec 16 09:40:00.299502 containerd[1491]: time="2024-12-16T09:40:00.298796950Z" level=info msg="shim disconnected" id=54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7 namespace=k8s.io
Dec 16 09:40:00.299502 containerd[1491]: time="2024-12-16T09:40:00.298863956Z" level=warning msg="cleaning up after shim disconnected" id=54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7 namespace=k8s.io
Dec 16 09:40:00.299502 containerd[1491]: time="2024-12-16T09:40:00.298873533Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:40:00.316261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 09:40:00.780549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591-rootfs.mount: Deactivated successfully.
Dec 16 09:40:01.169139 containerd[1491]: time="2024-12-16T09:40:01.169071669Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 09:40:01.194066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158739209.mount: Deactivated successfully.
Dec 16 09:40:01.197403 containerd[1491]: time="2024-12-16T09:40:01.197351431Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\""
Dec 16 09:40:01.197937 containerd[1491]: time="2024-12-16T09:40:01.197890820Z" level=info msg="StartContainer for \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\""
Dec 16 09:40:01.241066 systemd[1]: Started cri-containerd-095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9.scope - libcontainer container 095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9.
Dec 16 09:40:01.276791 containerd[1491]: time="2024-12-16T09:40:01.276565815Z" level=info msg="StartContainer for \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\" returns successfully"
Dec 16 09:40:01.281777 systemd[1]: cri-containerd-095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9.scope: Deactivated successfully.
Dec 16 09:40:01.308047 containerd[1491]: time="2024-12-16T09:40:01.307984065Z" level=info msg="shim disconnected" id=095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9 namespace=k8s.io
Dec 16 09:40:01.308370 containerd[1491]: time="2024-12-16T09:40:01.308329080Z" level=warning msg="cleaning up after shim disconnected" id=095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9 namespace=k8s.io
Dec 16 09:40:01.308370 containerd[1491]: time="2024-12-16T09:40:01.308347274Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:40:01.323223 containerd[1491]: time="2024-12-16T09:40:01.323159632Z" level=warning msg="cleanup warnings time=\"2024-12-16T09:40:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 16 09:40:01.779590 systemd[1]: run-containerd-runc-k8s.io-095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9-runc.4N2X9K.mount: Deactivated successfully.
Dec 16 09:40:01.779708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9-rootfs.mount: Deactivated successfully.
Dec 16 09:40:02.174070 containerd[1491]: time="2024-12-16T09:40:02.174022084Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 09:40:02.198384 containerd[1491]: time="2024-12-16T09:40:02.198327952Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\""
Dec 16 09:40:02.199465 containerd[1491]: time="2024-12-16T09:40:02.199130542Z" level=info msg="StartContainer for \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\""
Dec 16 09:40:02.238773 systemd[1]: Started cri-containerd-65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c.scope - libcontainer container 65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c.
Dec 16 09:40:02.268229 systemd[1]: cri-containerd-65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c.scope: Deactivated successfully.
Dec 16 09:40:02.270137 containerd[1491]: time="2024-12-16T09:40:02.270080026Z" level=info msg="StartContainer for \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\" returns successfully"
Dec 16 09:40:02.293796 containerd[1491]: time="2024-12-16T09:40:02.293718736Z" level=info msg="shim disconnected" id=65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c namespace=k8s.io
Dec 16 09:40:02.293796 containerd[1491]: time="2024-12-16T09:40:02.293779319Z" level=warning msg="cleaning up after shim disconnected" id=65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c namespace=k8s.io
Dec 16 09:40:02.293796 containerd[1491]: time="2024-12-16T09:40:02.293792033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:40:02.312320 containerd[1491]: time="2024-12-16T09:40:02.312040687Z" level=warning msg="cleanup warnings time=\"2024-12-16T09:40:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 16 09:40:02.779778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c-rootfs.mount: Deactivated successfully.
Dec 16 09:40:03.179784 containerd[1491]: time="2024-12-16T09:40:03.179394185Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 09:40:03.202644 containerd[1491]: time="2024-12-16T09:40:03.202483641Z" level=info msg="CreateContainer within sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\""
Dec 16 09:40:03.205102 containerd[1491]: time="2024-12-16T09:40:03.204004695Z" level=info msg="StartContainer for \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\""
Dec 16 09:40:03.244642 systemd[1]: Started cri-containerd-a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67.scope - libcontainer container a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67.
Dec 16 09:40:03.290421 containerd[1491]: time="2024-12-16T09:40:03.290373584Z" level=info msg="StartContainer for \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\" returns successfully"
Dec 16 09:40:03.528752 kubelet[2794]: I1216 09:40:03.528636 2794 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 16 09:40:03.562820 kubelet[2794]: I1216 09:40:03.562758 2794 topology_manager.go:215] "Topology Admit Handler" podUID="3a875c8e-aa51-4237-8861-e92199c7c129" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qltvv"
Dec 16 09:40:03.567301 kubelet[2794]: I1216 09:40:03.567208 2794 topology_manager.go:215] "Topology Admit Handler" podUID="f8cba869-f656-4889-a98e-6ccc86df0593" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v667m"
Dec 16 09:40:03.574604 systemd[1]: Created slice kubepods-burstable-pod3a875c8e_aa51_4237_8861_e92199c7c129.slice - libcontainer container kubepods-burstable-pod3a875c8e_aa51_4237_8861_e92199c7c129.slice.
Dec 16 09:40:03.585927 systemd[1]: Created slice kubepods-burstable-podf8cba869_f656_4889_a98e_6ccc86df0593.slice - libcontainer container kubepods-burstable-podf8cba869_f656_4889_a98e_6ccc86df0593.slice.
Dec 16 09:40:03.718469 kubelet[2794]: I1216 09:40:03.718372 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh6sv\" (UniqueName: \"kubernetes.io/projected/3a875c8e-aa51-4237-8861-e92199c7c129-kube-api-access-zh6sv\") pod \"coredns-7db6d8ff4d-qltvv\" (UID: \"3a875c8e-aa51-4237-8861-e92199c7c129\") " pod="kube-system/coredns-7db6d8ff4d-qltvv"
Dec 16 09:40:03.718469 kubelet[2794]: I1216 09:40:03.718469 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8cba869-f656-4889-a98e-6ccc86df0593-config-volume\") pod \"coredns-7db6d8ff4d-v667m\" (UID: \"f8cba869-f656-4889-a98e-6ccc86df0593\") " pod="kube-system/coredns-7db6d8ff4d-v667m"
Dec 16 09:40:03.718662 kubelet[2794]: I1216 09:40:03.718493 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a875c8e-aa51-4237-8861-e92199c7c129-config-volume\") pod \"coredns-7db6d8ff4d-qltvv\" (UID: \"3a875c8e-aa51-4237-8861-e92199c7c129\") " pod="kube-system/coredns-7db6d8ff4d-qltvv"
Dec 16 09:40:03.718662 kubelet[2794]: I1216 09:40:03.718510 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhjz\" (UniqueName: \"kubernetes.io/projected/f8cba869-f656-4889-a98e-6ccc86df0593-kube-api-access-5hhjz\") pod \"coredns-7db6d8ff4d-v667m\" (UID: \"f8cba869-f656-4889-a98e-6ccc86df0593\") " pod="kube-system/coredns-7db6d8ff4d-v667m"
Dec 16 09:40:03.882138 containerd[1491]: time="2024-12-16T09:40:03.881761329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qltvv,Uid:3a875c8e-aa51-4237-8861-e92199c7c129,Namespace:kube-system,Attempt:0,}"
Dec 16 09:40:03.897620 containerd[1491]: time="2024-12-16T09:40:03.895857232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v667m,Uid:f8cba869-f656-4889-a98e-6ccc86df0593,Namespace:kube-system,Attempt:0,}"
Dec 16 09:40:04.195927 kubelet[2794]: I1216 09:40:04.195739 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-686qt" podStartSLOduration=5.335269049 podStartE2EDuration="13.195693958s" podCreationTimestamp="2024-12-16 09:39:51 +0000 UTC" firstStartedPulling="2024-12-16 09:39:51.847359627 +0000 UTC m=+14.913769721" lastFinishedPulling="2024-12-16 09:39:59.707784537 +0000 UTC m=+22.774194630" observedRunningTime="2024-12-16 09:40:04.195286796 +0000 UTC m=+27.261696900" watchObservedRunningTime="2024-12-16 09:40:04.195693958 +0000 UTC m=+27.262104051"
Dec 16 09:40:05.377034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184385594.mount: Deactivated successfully.
Dec 16 09:40:06.016624 containerd[1491]: time="2024-12-16T09:40:06.016570258Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:40:06.017604 containerd[1491]: time="2024-12-16T09:40:06.017444472Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217"
Dec 16 09:40:06.018642 containerd[1491]: time="2024-12-16T09:40:06.018604972Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 09:40:06.020322 containerd[1491]: time="2024-12-16T09:40:06.019939628Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.311411983s"
Dec 16 09:40:06.020322 containerd[1491]: time="2024-12-16T09:40:06.019978881Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 16 09:40:06.022774 containerd[1491]: time="2024-12-16T09:40:06.022636540Z" level=info msg="CreateContainer within sandbox \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 16 09:40:06.039871 containerd[1491]: time="2024-12-16T09:40:06.039818829Z" level=info msg="CreateContainer within sandbox \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\""
Dec 16 09:40:06.041670 containerd[1491]: time="2024-12-16T09:40:06.040536120Z" level=info msg="StartContainer for \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\""
Dec 16 09:40:06.079673 systemd[1]: Started cri-containerd-11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9.scope - libcontainer container 11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9.
Dec 16 09:40:06.121246 containerd[1491]: time="2024-12-16T09:40:06.121155339Z" level=info msg="StartContainer for \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\" returns successfully"
Dec 16 09:40:09.604240 systemd-networkd[1390]: cilium_host: Link UP
Dec 16 09:40:09.605016 systemd-networkd[1390]: cilium_net: Link UP
Dec 16 09:40:09.605691 systemd-networkd[1390]: cilium_net: Gained carrier
Dec 16 09:40:09.606421 systemd-networkd[1390]: cilium_host: Gained carrier
Dec 16 09:40:09.733289 systemd-networkd[1390]: cilium_vxlan: Link UP
Dec 16 09:40:09.733300 systemd-networkd[1390]: cilium_vxlan: Gained carrier
Dec 16 09:40:09.933652 systemd-networkd[1390]: cilium_net: Gained IPv6LL
Dec 16 09:40:10.101817 systemd-networkd[1390]: cilium_host: Gained IPv6LL
Dec 16 09:40:10.184609 kernel: NET: Registered PF_ALG protocol family
Dec 16 09:40:10.851775 systemd-networkd[1390]: lxc_health: Link UP
Dec 16 09:40:10.852077 systemd-networkd[1390]: lxc_health: Gained carrier
Dec 16 09:40:10.927642 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL
Dec 16 09:40:10.991265 systemd-networkd[1390]: lxc0fea0e91b221: Link UP
Dec 16 09:40:10.995457 kernel: eth0: renamed from tmp9e583
Dec 16 09:40:11.007036 systemd-networkd[1390]: lxc0fea0e91b221: Gained carrier
Dec 16 09:40:11.010179 systemd-networkd[1390]: lxc9a8dddcf38a6: Link UP
Dec 16 09:40:11.019468 kernel: eth0: renamed from tmpb64c9
Dec 16 09:40:11.023854 systemd-networkd[1390]: lxc9a8dddcf38a6: Gained carrier
Dec 16 09:40:11.767720 kubelet[2794]: I1216 09:40:11.767459 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-kd4fx" podStartSLOduration=6.79529212 podStartE2EDuration="20.767439018s" podCreationTimestamp="2024-12-16 09:39:51 +0000 UTC" firstStartedPulling="2024-12-16 09:39:52.048724042 +0000 UTC m=+15.115134134" lastFinishedPulling="2024-12-16 09:40:06.020870939 +0000 UTC m=+29.087281032" observedRunningTime="2024-12-16 09:40:06.199870165 +0000 UTC m=+29.266280257" watchObservedRunningTime="2024-12-16 09:40:11.767439018 +0000 UTC m=+34.833849111"
Dec 16 09:40:12.205567 systemd-networkd[1390]: lxc_health: Gained IPv6LL
Dec 16 09:40:12.333595 systemd-networkd[1390]: lxc9a8dddcf38a6: Gained IPv6LL
Dec 16 09:40:12.461585 systemd-networkd[1390]: lxc0fea0e91b221: Gained IPv6LL
Dec 16 09:40:14.526161 containerd[1491]: time="2024-12-16T09:40:14.525918536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:40:14.527724 containerd[1491]: time="2024-12-16T09:40:14.526167591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:40:14.527724 containerd[1491]: time="2024-12-16T09:40:14.526244355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:40:14.527724 containerd[1491]: time="2024-12-16T09:40:14.526660834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:40:14.533452 containerd[1491]: time="2024-12-16T09:40:14.530705721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:40:14.533452 containerd[1491]: time="2024-12-16T09:40:14.530781182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:40:14.533452 containerd[1491]: time="2024-12-16T09:40:14.530795329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:40:14.533452 containerd[1491]: time="2024-12-16T09:40:14.530893863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:40:14.571202 systemd[1]: Started cri-containerd-b64c9ecc11ab71f44129fa3f35c3aecdf4d61cab04c67da3b15ab710baaebfc4.scope - libcontainer container b64c9ecc11ab71f44129fa3f35c3aecdf4d61cab04c67da3b15ab710baaebfc4.
Dec 16 09:40:14.583095 systemd[1]: Started cri-containerd-9e58348e302dc925db431ed60900632dd21e8f8e47e7d2bcf2cda80a0cef9303.scope - libcontainer container 9e58348e302dc925db431ed60900632dd21e8f8e47e7d2bcf2cda80a0cef9303.
Dec 16 09:40:14.639655 containerd[1491]: time="2024-12-16T09:40:14.639618797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v667m,Uid:f8cba869-f656-4889-a98e-6ccc86df0593,Namespace:kube-system,Attempt:0,} returns sandbox id \"b64c9ecc11ab71f44129fa3f35c3aecdf4d61cab04c67da3b15ab710baaebfc4\""
Dec 16 09:40:14.646643 containerd[1491]: time="2024-12-16T09:40:14.646592001Z" level=info msg="CreateContainer within sandbox \"b64c9ecc11ab71f44129fa3f35c3aecdf4d61cab04c67da3b15ab710baaebfc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 09:40:14.688737 containerd[1491]: time="2024-12-16T09:40:14.688691565Z" level=info msg="CreateContainer within sandbox \"b64c9ecc11ab71f44129fa3f35c3aecdf4d61cab04c67da3b15ab710baaebfc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a55a0f5ff294c1f902aef3ba16c27ad685e9347fd10288f73b43d250164feb3a\""
Dec 16 09:40:14.691832 containerd[1491]: time="2024-12-16T09:40:14.691637625Z" level=info msg="StartContainer for \"a55a0f5ff294c1f902aef3ba16c27ad685e9347fd10288f73b43d250164feb3a\""
Dec 16 09:40:14.722870 containerd[1491]: time="2024-12-16T09:40:14.722806571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qltvv,Uid:3a875c8e-aa51-4237-8861-e92199c7c129,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e58348e302dc925db431ed60900632dd21e8f8e47e7d2bcf2cda80a0cef9303\""
Dec 16 09:40:14.729273 containerd[1491]: time="2024-12-16T09:40:14.729232843Z" level=info msg="CreateContainer within sandbox \"9e58348e302dc925db431ed60900632dd21e8f8e47e7d2bcf2cda80a0cef9303\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 09:40:14.744246 containerd[1491]: time="2024-12-16T09:40:14.744081158Z" level=info msg="CreateContainer within sandbox \"9e58348e302dc925db431ed60900632dd21e8f8e47e7d2bcf2cda80a0cef9303\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d1334f36a3f9e376dec8380c073ade5ff2b18251a765372459da34ebec2b1e7\""
Dec 16 09:40:14.745638 containerd[1491]: time="2024-12-16T09:40:14.745619096Z" level=info msg="StartContainer for \"3d1334f36a3f9e376dec8380c073ade5ff2b18251a765372459da34ebec2b1e7\""
Dec 16 09:40:14.754142 systemd[1]: Started cri-containerd-a55a0f5ff294c1f902aef3ba16c27ad685e9347fd10288f73b43d250164feb3a.scope - libcontainer container a55a0f5ff294c1f902aef3ba16c27ad685e9347fd10288f73b43d250164feb3a.
Dec 16 09:40:14.784801 systemd[1]: Started cri-containerd-3d1334f36a3f9e376dec8380c073ade5ff2b18251a765372459da34ebec2b1e7.scope - libcontainer container 3d1334f36a3f9e376dec8380c073ade5ff2b18251a765372459da34ebec2b1e7.
Dec 16 09:40:14.809883 containerd[1491]: time="2024-12-16T09:40:14.809815163Z" level=info msg="StartContainer for \"a55a0f5ff294c1f902aef3ba16c27ad685e9347fd10288f73b43d250164feb3a\" returns successfully" Dec 16 09:40:14.827884 containerd[1491]: time="2024-12-16T09:40:14.827833298Z" level=info msg="StartContainer for \"3d1334f36a3f9e376dec8380c073ade5ff2b18251a765372459da34ebec2b1e7\" returns successfully" Dec 16 09:40:15.217693 kubelet[2794]: I1216 09:40:15.217638 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v667m" podStartSLOduration=24.217624313 podStartE2EDuration="24.217624313s" podCreationTimestamp="2024-12-16 09:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:40:15.216805571 +0000 UTC m=+38.283215674" watchObservedRunningTime="2024-12-16 09:40:15.217624313 +0000 UTC m=+38.284034406" Dec 16 09:40:15.242231 kubelet[2794]: I1216 09:40:15.241735 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qltvv" podStartSLOduration=24.241719708 podStartE2EDuration="24.241719708s" podCreationTimestamp="2024-12-16 09:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:40:15.229513734 +0000 UTC m=+38.295923867" watchObservedRunningTime="2024-12-16 09:40:15.241719708 +0000 UTC m=+38.308129801" Dec 16 09:40:15.534707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1411871574.mount: Deactivated successfully. Dec 16 09:44:26.751700 systemd[1]: Started sshd@7-138.199.148.223:22-147.75.109.163:58498.service - OpenSSH per-connection server daemon (147.75.109.163:58498). 
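The kubelet's podStartSLOduration in the entries above is essentially observedRunningTime minus podCreationTimestamp. A minimal sketch (Python; `parse_ts` is our own helper, timestamps copied from the coredns-7db6d8ff4d-v667m record above) that reproduces the arithmetic to within a millisecond or so of the logged value:

```python
from datetime import datetime, timezone

def parse_ts(ts: str) -> datetime:
    """Parse a kubelet timestamp like '2024-12-16 09:40:15.216805571 +0000 UTC',
    truncating nanoseconds to the microseconds strptime can handle."""
    ts = ts.replace(" +0000 UTC", "")
    if "." in ts:
        head, frac = ts.split(".")
        return datetime.strptime(f"{head}.{frac[:6]}",
                                 "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

# Values copied from the pod_startup_latency_tracker entry above.
created = parse_ts("2024-12-16 09:39:51 +0000 UTC")       # podCreationTimestamp
running = parse_ts("2024-12-16 09:40:15.216805571 +0000 UTC")  # observedRunningTime
latency = (running - created).total_seconds()  # close to the logged podStartSLOduration=24.217624313
```

The last digits differ from the logged SLO duration because the kubelet derives it from its own clock readings, but the ~24.2 s figure matches.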
Dec 16 09:44:27.762089 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 58498 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:27.764757 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:27.771457 systemd-logind[1475]: New session 8 of user core.
Dec 16 09:44:27.776625 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 09:44:28.875807 sshd[4189]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:28.880373 systemd[1]: sshd@7-138.199.148.223:22-147.75.109.163:58498.service: Deactivated successfully.
Dec 16 09:44:28.882645 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 09:44:28.883465 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit.
Dec 16 09:44:28.885208 systemd-logind[1475]: Removed session 8.
Dec 16 09:44:34.044295 systemd[1]: Started sshd@8-138.199.148.223:22-147.75.109.163:58500.service - OpenSSH per-connection server daemon (147.75.109.163:58500).
Dec 16 09:44:35.023616 sshd[4203]: Accepted publickey for core from 147.75.109.163 port 58500 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:35.025303 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:35.029945 systemd-logind[1475]: New session 9 of user core.
Dec 16 09:44:35.034653 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 09:44:35.774595 sshd[4203]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:35.777295 systemd[1]: sshd@8-138.199.148.223:22-147.75.109.163:58500.service: Deactivated successfully.
Dec 16 09:44:35.779454 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 09:44:35.781084 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit.
Dec 16 09:44:35.782353 systemd-logind[1475]: Removed session 9.
Dec 16 09:44:40.951970 systemd[1]: Started sshd@9-138.199.148.223:22-147.75.109.163:40544.service - OpenSSH per-connection server daemon (147.75.109.163:40544).
Dec 16 09:44:41.925111 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 40544 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:41.927061 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:41.932037 systemd-logind[1475]: New session 10 of user core.
Dec 16 09:44:41.939616 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 09:44:42.663642 sshd[4219]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:42.667242 systemd[1]: sshd@9-138.199.148.223:22-147.75.109.163:40544.service: Deactivated successfully.
Dec 16 09:44:42.669519 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 09:44:42.671765 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit.
Dec 16 09:44:42.673469 systemd-logind[1475]: Removed session 10.
Dec 16 09:44:42.833733 systemd[1]: Started sshd@10-138.199.148.223:22-147.75.109.163:40550.service - OpenSSH per-connection server daemon (147.75.109.163:40550).
Dec 16 09:44:43.797361 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 40550 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:43.799081 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:43.803084 systemd-logind[1475]: New session 11 of user core.
Dec 16 09:44:43.808600 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 09:44:44.600076 sshd[4233]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:44.605134 systemd[1]: sshd@10-138.199.148.223:22-147.75.109.163:40550.service: Deactivated successfully.
Dec 16 09:44:44.607028 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 09:44:44.608344 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit.
Dec 16 09:44:44.609805 systemd-logind[1475]: Removed session 11.
Dec 16 09:44:44.770605 systemd[1]: Started sshd@11-138.199.148.223:22-147.75.109.163:40566.service - OpenSSH per-connection server daemon (147.75.109.163:40566).
Dec 16 09:44:45.762501 sshd[4244]: Accepted publickey for core from 147.75.109.163 port 40566 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:45.764642 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:45.768950 systemd-logind[1475]: New session 12 of user core.
Dec 16 09:44:45.774585 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 09:44:46.524079 sshd[4244]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:46.527676 systemd[1]: sshd@11-138.199.148.223:22-147.75.109.163:40566.service: Deactivated successfully.
Dec 16 09:44:46.530020 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 09:44:46.531992 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit.
Dec 16 09:44:46.533192 systemd-logind[1475]: Removed session 12.
Dec 16 09:44:51.690600 systemd[1]: Started sshd@12-138.199.148.223:22-147.75.109.163:46370.service - OpenSSH per-connection server daemon (147.75.109.163:46370).
Dec 16 09:44:52.677735 sshd[4256]: Accepted publickey for core from 147.75.109.163 port 46370 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:52.680551 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:52.686675 systemd-logind[1475]: New session 13 of user core.
Dec 16 09:44:52.691701 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 09:44:53.425070 sshd[4256]: pam_unix(sshd:session): session closed for user core
Dec 16 09:44:53.429705 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit.
Dec 16 09:44:53.430897 systemd[1]: sshd@12-138.199.148.223:22-147.75.109.163:46370.service: Deactivated successfully.
Dec 16 09:44:53.433519 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 09:44:53.434813 systemd-logind[1475]: Removed session 13.
Dec 16 09:44:58.593474 systemd[1]: Started sshd@13-138.199.148.223:22-147.75.109.163:49762.service - OpenSSH per-connection server daemon (147.75.109.163:49762).
Dec 16 09:44:59.571150 sshd[4270]: Accepted publickey for core from 147.75.109.163 port 49762 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:44:59.572769 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:44:59.578576 systemd-logind[1475]: New session 14 of user core.
Dec 16 09:44:59.583683 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 09:45:00.299028 sshd[4270]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:00.303181 systemd[1]: sshd@13-138.199.148.223:22-147.75.109.163:49762.service: Deactivated successfully.
Dec 16 09:45:00.305274 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 09:45:00.305968 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit.
Dec 16 09:45:00.307339 systemd-logind[1475]: Removed session 14.
Dec 16 09:45:00.469774 systemd[1]: Started sshd@14-138.199.148.223:22-147.75.109.163:49778.service - OpenSSH per-connection server daemon (147.75.109.163:49778).
Dec 16 09:45:01.433691 sshd[4284]: Accepted publickey for core from 147.75.109.163 port 49778 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:01.435573 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:01.440912 systemd-logind[1475]: New session 15 of user core.
Dec 16 09:45:01.448639 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 09:45:02.378111 sshd[4284]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:02.381707 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit.
Dec 16 09:45:02.383877 systemd[1]: sshd@14-138.199.148.223:22-147.75.109.163:49778.service: Deactivated successfully.
Dec 16 09:45:02.388180 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 09:45:02.389105 systemd-logind[1475]: Removed session 15.
Dec 16 09:45:02.550822 systemd[1]: Started sshd@15-138.199.148.223:22-147.75.109.163:49782.service - OpenSSH per-connection server daemon (147.75.109.163:49782).
Dec 16 09:45:03.547922 sshd[4299]: Accepted publickey for core from 147.75.109.163 port 49782 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:03.549995 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:03.554599 systemd-logind[1475]: New session 16 of user core.
Dec 16 09:45:03.557641 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 09:45:05.788044 sshd[4299]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:05.796203 systemd[1]: sshd@15-138.199.148.223:22-147.75.109.163:49782.service: Deactivated successfully.
Dec 16 09:45:05.798943 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 09:45:05.801115 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit.
Dec 16 09:45:05.802299 systemd-logind[1475]: Removed session 16.
Dec 16 09:45:05.957977 systemd[1]: Started sshd@16-138.199.148.223:22-147.75.109.163:49786.service - OpenSSH per-connection server daemon (147.75.109.163:49786).
Dec 16 09:45:06.941307 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 49786 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:06.943106 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:06.947501 systemd-logind[1475]: New session 17 of user core.
Dec 16 09:45:06.951621 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 09:45:07.890674 sshd[4317]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:07.895746 systemd[1]: sshd@16-138.199.148.223:22-147.75.109.163:49786.service: Deactivated successfully.
Dec 16 09:45:07.898419 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 09:45:07.899256 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit.
Dec 16 09:45:07.900393 systemd-logind[1475]: Removed session 17.
Dec 16 09:45:08.062688 systemd[1]: Started sshd@17-138.199.148.223:22-147.75.109.163:40056.service - OpenSSH per-connection server daemon (147.75.109.163:40056).
Dec 16 09:45:09.054412 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 40056 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:09.056312 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:09.061094 systemd-logind[1475]: New session 18 of user core.
Dec 16 09:45:09.064760 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 09:45:09.801682 sshd[4329]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:09.805329 systemd[1]: sshd@17-138.199.148.223:22-147.75.109.163:40056.service: Deactivated successfully.
Dec 16 09:45:09.807969 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 09:45:09.809855 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit.
Dec 16 09:45:09.811520 systemd-logind[1475]: Removed session 18.
Dec 16 09:45:14.978061 systemd[1]: Started sshd@18-138.199.148.223:22-147.75.109.163:40064.service - OpenSSH per-connection server daemon (147.75.109.163:40064).
Dec 16 09:45:15.955036 sshd[4345]: Accepted publickey for core from 147.75.109.163 port 40064 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:15.956966 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:15.962294 systemd-logind[1475]: New session 19 of user core.
Dec 16 09:45:15.966593 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 09:45:16.711220 sshd[4345]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:16.715014 systemd[1]: sshd@18-138.199.148.223:22-147.75.109.163:40064.service: Deactivated successfully.
Dec 16 09:45:16.716920 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 09:45:16.717484 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit.
Dec 16 09:45:16.718870 systemd-logind[1475]: Removed session 19.
Dec 16 09:45:21.883671 systemd[1]: Started sshd@19-138.199.148.223:22-147.75.109.163:59850.service - OpenSSH per-connection server daemon (147.75.109.163:59850).
Dec 16 09:45:22.846710 sshd[4358]: Accepted publickey for core from 147.75.109.163 port 59850 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:22.848222 sshd[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:22.852336 systemd-logind[1475]: New session 20 of user core.
Dec 16 09:45:22.856562 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 09:45:23.570564 sshd[4358]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:23.573969 systemd[1]: sshd@19-138.199.148.223:22-147.75.109.163:59850.service: Deactivated successfully.
Dec 16 09:45:23.577156 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 09:45:23.579164 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit.
Dec 16 09:45:23.580480 systemd-logind[1475]: Removed session 20.
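Every sshd cycle above follows the same pattern: Accepted publickey, session opened, session closed, then the per-connection unit and scope deactivate. A small sketch (Python; the two sample lines are copied from the session-19 cycle above, the regex and helper names are our own, and the year is supplied by hand because journald short timestamps omit it) that pairs open/close lines per sshd PID and computes session length:

```python
import re
from datetime import datetime

# Two lines copied from the session-19 cycle above.
LINES = [
    "Dec 16 09:45:15.956966 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
    "Dec 16 09:45:16.711220 sshd[4345]: pam_unix(sshd:session): session closed for user core",
]

# Timestamp, sshd PID, and whether the session opened or closed.
STAMP = re.compile(r"^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)")

opened = {}
durations = {}
for line in LINES:
    m = STAMP.match(line)
    if not m:
        continue
    ts = datetime.strptime("2024 " + m.group(1), "%Y %b %d %H:%M:%S.%f")
    pid, event = m.group(2), m.group(3)
    if event == "opened":
        opened[pid] = ts
    elif pid in opened:
        durations[pid] = (ts - opened.pop(pid)).total_seconds()
```

Keying on the sshd PID rather than the session number is what lets the pairing survive interleaved connections.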
Dec 16 09:45:23.743065 systemd[1]: Started sshd@20-138.199.148.223:22-147.75.109.163:59864.service - OpenSSH per-connection server daemon (147.75.109.163:59864).
Dec 16 09:45:24.708637 sshd[4374]: Accepted publickey for core from 147.75.109.163 port 59864 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:24.710752 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:24.715892 systemd-logind[1475]: New session 21 of user core.
Dec 16 09:45:24.722629 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 09:45:26.643813 systemd[1]: run-containerd-runc-k8s.io-a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67-runc.nwM44T.mount: Deactivated successfully.
Dec 16 09:45:26.657609 containerd[1491]: time="2024-12-16T09:45:26.657241883Z" level=info msg="StopContainer for \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\" with timeout 30 (s)"
Dec 16 09:45:26.659784 containerd[1491]: time="2024-12-16T09:45:26.659755215Z" level=info msg="Stop container \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\" with signal terminated"
Dec 16 09:45:26.665677 containerd[1491]: time="2024-12-16T09:45:26.665624427Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 09:45:26.681882 systemd[1]: cri-containerd-11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9.scope: Deactivated successfully.
Dec 16 09:45:26.682551 containerd[1491]: time="2024-12-16T09:45:26.682513650Z" level=info msg="StopContainer for \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\" with timeout 2 (s)"
Dec 16 09:45:26.684092 containerd[1491]: time="2024-12-16T09:45:26.683584383Z" level=info msg="Stop container \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\" with signal terminated"
Dec 16 09:45:26.698394 systemd-networkd[1390]: lxc_health: Link DOWN
Dec 16 09:45:26.698406 systemd-networkd[1390]: lxc_health: Lost carrier
Dec 16 09:45:26.727136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9-rootfs.mount: Deactivated successfully.
Dec 16 09:45:26.730894 systemd[1]: cri-containerd-a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67.scope: Deactivated successfully.
Dec 16 09:45:26.731165 systemd[1]: cri-containerd-a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67.scope: Consumed 7.647s CPU time.
Dec 16 09:45:26.739466 containerd[1491]: time="2024-12-16T09:45:26.739322225Z" level=info msg="shim disconnected" id=11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9 namespace=k8s.io
Dec 16 09:45:26.739466 containerd[1491]: time="2024-12-16T09:45:26.739399039Z" level=warning msg="cleaning up after shim disconnected" id=11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9 namespace=k8s.io
Dec 16 09:45:26.739466 containerd[1491]: time="2024-12-16T09:45:26.739408617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:45:26.767984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67-rootfs.mount: Deactivated successfully.
Dec 16 09:45:26.772643 containerd[1491]: time="2024-12-16T09:45:26.772576769Z" level=info msg="shim disconnected" id=a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67 namespace=k8s.io
Dec 16 09:45:26.772643 containerd[1491]: time="2024-12-16T09:45:26.772621222Z" level=warning msg="cleaning up after shim disconnected" id=a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67 namespace=k8s.io
Dec 16 09:45:26.772643 containerd[1491]: time="2024-12-16T09:45:26.772629537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:45:26.774281 containerd[1491]: time="2024-12-16T09:45:26.773711533Z" level=info msg="StopContainer for \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\" returns successfully"
Dec 16 09:45:26.774828 containerd[1491]: time="2024-12-16T09:45:26.774805409Z" level=info msg="StopPodSandbox for \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\""
Dec 16 09:45:26.775042 containerd[1491]: time="2024-12-16T09:45:26.774915706Z" level=info msg="Container to stop \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 09:45:26.779292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8-shm.mount: Deactivated successfully.
Dec 16 09:45:26.797817 systemd[1]: cri-containerd-1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8.scope: Deactivated successfully.
Dec 16 09:45:26.804560 containerd[1491]: time="2024-12-16T09:45:26.804524718Z" level=info msg="StopContainer for \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\" returns successfully"
Dec 16 09:45:26.805366 containerd[1491]: time="2024-12-16T09:45:26.805321388Z" level=info msg="StopPodSandbox for \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\""
Dec 16 09:45:26.805619 containerd[1491]: time="2024-12-16T09:45:26.805597365Z" level=info msg="Container to stop \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 09:45:26.805733 containerd[1491]: time="2024-12-16T09:45:26.805713893Z" level=info msg="Container to stop \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 09:45:26.805841 containerd[1491]: time="2024-12-16T09:45:26.805821896Z" level=info msg="Container to stop \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 09:45:26.805997 containerd[1491]: time="2024-12-16T09:45:26.805976274Z" level=info msg="Container to stop \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 09:45:26.806211 containerd[1491]: time="2024-12-16T09:45:26.806062827Z" level=info msg="Container to stop \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 09:45:26.821789 systemd[1]: cri-containerd-e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38.scope: Deactivated successfully.
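The containerd records above carry logfmt-style key=value fields, with literal quotes inside msg escaped as \". A rough sketch (Python; the sample line is copied from the StopContainer entry above, and the regex is our own assumption about the field layout) that pulls out level and the unescaped msg:

```python
import re

# One record copied from the log above; \\" in this source literal is the log's \" escape.
LINE = ('Dec 16 09:45:26.804560 containerd[1491]: time="2024-12-16T09:45:26.804524718Z" '
        'level=info msg="StopContainer for '
        '\\"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\\" '
        'returns successfully"')

# msg is a double-quoted string whose inner quotes are backslash-escaped,
# so match "any char that is not a quote or backslash, or a backslash pair".
FIELDS = re.compile(r'time="(?P<time>[^"]*)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>(?:[^"\\]|\\.)*)"')

m = FIELDS.search(LINE)
level = m.group("level")
msg = m.group("msg").replace('\\"', '"')  # undo the \" escaping around the container ID
```

The alternation `(?:[^"\\]|\\.)*` is what keeps the match from ending early at the escaped quotes around the 64-hex container ID.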
Dec 16 09:45:26.830339 containerd[1491]: time="2024-12-16T09:45:26.830279037Z" level=info msg="shim disconnected" id=1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8 namespace=k8s.io
Dec 16 09:45:26.830578 containerd[1491]: time="2024-12-16T09:45:26.830556988Z" level=warning msg="cleaning up after shim disconnected" id=1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8 namespace=k8s.io
Dec 16 09:45:26.830676 containerd[1491]: time="2024-12-16T09:45:26.830656004Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:45:26.850967 containerd[1491]: time="2024-12-16T09:45:26.850911654Z" level=info msg="shim disconnected" id=e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38 namespace=k8s.io
Dec 16 09:45:26.850967 containerd[1491]: time="2024-12-16T09:45:26.850957630Z" level=warning msg="cleaning up after shim disconnected" id=e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38 namespace=k8s.io
Dec 16 09:45:26.850967 containerd[1491]: time="2024-12-16T09:45:26.850965925Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 09:45:26.859218 containerd[1491]: time="2024-12-16T09:45:26.859073886Z" level=info msg="TearDown network for sandbox \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" successfully"
Dec 16 09:45:26.859218 containerd[1491]: time="2024-12-16T09:45:26.859124261Z" level=info msg="StopPodSandbox for \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" returns successfully"
Dec 16 09:45:26.866782 containerd[1491]: time="2024-12-16T09:45:26.866753375Z" level=info msg="TearDown network for sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" successfully"
Dec 16 09:45:26.867372 containerd[1491]: time="2024-12-16T09:45:26.866869062Z" level=info msg="StopPodSandbox for \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" returns successfully"
Dec 16 09:45:26.962559 kubelet[2794]: I1216 09:45:26.961287 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-cilium-config-path\") pod \"f4ec3fce-c376-4ad4-90a5-5fe1d3df7028\" (UID: \"f4ec3fce-c376-4ad4-90a5-5fe1d3df7028\") "
Dec 16 09:45:26.962559 kubelet[2794]: I1216 09:45:26.961345 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac98d646-15f1-4d2c-9ee6-19650962f029-clustermesh-secrets\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.962559 kubelet[2794]: I1216 09:45:26.961363 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-hostproc\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.962559 kubelet[2794]: I1216 09:45:26.961379 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-config-path\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.962559 kubelet[2794]: I1216 09:45:26.961392 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-kernel\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.962559 kubelet[2794]: I1216 09:45:26.961405 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-xtables-lock\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963123 kubelet[2794]: I1216 09:45:26.961420 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-hubble-tls\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963123 kubelet[2794]: I1216 09:45:26.961447 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-lib-modules\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963123 kubelet[2794]: I1216 09:45:26.961461 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-cgroup\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963123 kubelet[2794]: I1216 09:45:26.961475 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-run\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963123 kubelet[2794]: I1216 09:45:26.961490 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-net\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963123 kubelet[2794]: I1216 09:45:26.961504 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cni-path\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963281 kubelet[2794]: I1216 09:45:26.961516 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-etc-cni-netd\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963281 kubelet[2794]: I1216 09:45:26.961534 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfqg9\" (UniqueName: \"kubernetes.io/projected/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-kube-api-access-sfqg9\") pod \"f4ec3fce-c376-4ad4-90a5-5fe1d3df7028\" (UID: \"f4ec3fce-c376-4ad4-90a5-5fe1d3df7028\") "
Dec 16 09:45:26.963281 kubelet[2794]: I1216 09:45:26.961551 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qr2r\" (UniqueName: \"kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-kube-api-access-5qr2r\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963281 kubelet[2794]: I1216 09:45:26.961564 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-bpf-maps\") pod \"ac98d646-15f1-4d2c-9ee6-19650962f029\" (UID: \"ac98d646-15f1-4d2c-9ee6-19650962f029\") "
Dec 16 09:45:26.963847 kubelet[2794]: I1216 09:45:26.961623 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.968447 kubelet[2794]: I1216 09:45:26.966634 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac98d646-15f1-4d2c-9ee6-19650962f029-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 09:45:26.968447 kubelet[2794]: I1216 09:45:26.966670 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-hostproc" (OuterVolumeSpecName: "hostproc") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.968447 kubelet[2794]: I1216 09:45:26.967781 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4ec3fce-c376-4ad4-90a5-5fe1d3df7028" (UID: "f4ec3fce-c376-4ad4-90a5-5fe1d3df7028"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 09:45:26.968447 kubelet[2794]: I1216 09:45:26.967813 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.968447 kubelet[2794]: I1216 09:45:26.967833 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.968586 kubelet[2794]: I1216 09:45:26.967869 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.969679 kubelet[2794]: I1216 09:45:26.969661 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 09:45:26.969786 kubelet[2794]: I1216 09:45:26.969763 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.969895 kubelet[2794]: I1216 09:45:26.969881 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cni-path" (OuterVolumeSpecName: "cni-path") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.969957 kubelet[2794]: I1216 09:45:26.969944 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.974263 kubelet[2794]: I1216 09:45:26.974216 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.974263 kubelet[2794]: I1216 09:45:26.974250 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 09:45:26.975832 kubelet[2794]: I1216 09:45:26.975786 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 09:45:26.975911 kubelet[2794]: I1216 09:45:26.975823 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-kube-api-access-sfqg9" (OuterVolumeSpecName: "kube-api-access-sfqg9") pod "f4ec3fce-c376-4ad4-90a5-5fe1d3df7028" (UID: "f4ec3fce-c376-4ad4-90a5-5fe1d3df7028"). InnerVolumeSpecName "kube-api-access-sfqg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 09:45:26.975911 kubelet[2794]: I1216 09:45:26.975882 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-kube-api-access-5qr2r" (OuterVolumeSpecName: "kube-api-access-5qr2r") pod "ac98d646-15f1-4d2c-9ee6-19650962f029" (UID: "ac98d646-15f1-4d2c-9ee6-19650962f029"). InnerVolumeSpecName "kube-api-access-5qr2r".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064397 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-cgroup\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064455 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-run\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064467 2794 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-net\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064478 2794 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-cni-path\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064488 2794 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-etc-cni-netd\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064498 2794 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sfqg9\" (UniqueName: \"kubernetes.io/projected/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-kube-api-access-sfqg9\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064508 2794 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5qr2r\" (UniqueName: 
\"kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-kube-api-access-5qr2r\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.064833 kubelet[2794]: I1216 09:45:27.064517 2794 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-bpf-maps\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064526 2794 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac98d646-15f1-4d2c-9ee6-19650962f029-clustermesh-secrets\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064534 2794 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-hostproc\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064542 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac98d646-15f1-4d2c-9ee6-19650962f029-cilium-config-path\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064550 2794 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-host-proc-sys-kernel\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064558 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028-cilium-config-path\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064566 2794 reconciler_common.go:289] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-xtables-lock\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064574 2794 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac98d646-15f1-4d2c-9ee6-19650962f029-hubble-tls\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.065121 kubelet[2794]: I1216 09:45:27.064583 2794 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac98d646-15f1-4d2c-9ee6-19650962f029-lib-modules\") on node \"ci-4081-2-1-b-2c3a583fea\" DevicePath \"\"" Dec 16 09:45:27.073817 systemd[1]: Removed slice kubepods-burstable-podac98d646_15f1_4d2c_9ee6_19650962f029.slice - libcontainer container kubepods-burstable-podac98d646_15f1_4d2c_9ee6_19650962f029.slice. Dec 16 09:45:27.074212 systemd[1]: kubepods-burstable-podac98d646_15f1_4d2c_9ee6_19650962f029.slice: Consumed 7.748s CPU time. Dec 16 09:45:27.075593 systemd[1]: Removed slice kubepods-besteffort-podf4ec3fce_c376_4ad4_90a5_5fe1d3df7028.slice - libcontainer container kubepods-besteffort-podf4ec3fce_c376_4ad4_90a5_5fe1d3df7028.slice. Dec 16 09:45:27.204649 kubelet[2794]: E1216 09:45:27.204586 2794 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 09:45:27.630245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8-rootfs.mount: Deactivated successfully. Dec 16 09:45:27.630575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38-rootfs.mount: Deactivated successfully. 
Dec 16 09:45:27.630760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38-shm.mount: Deactivated successfully.
Dec 16 09:45:27.630856 systemd[1]: var-lib-kubelet-pods-f4ec3fce\x2dc376\x2d4ad4\x2d90a5\x2d5fe1d3df7028-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfqg9.mount: Deactivated successfully.
Dec 16 09:45:27.631021 systemd[1]: var-lib-kubelet-pods-ac98d646\x2d15f1\x2d4d2c\x2d9ee6\x2d19650962f029-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qr2r.mount: Deactivated successfully.
Dec 16 09:45:27.631130 systemd[1]: var-lib-kubelet-pods-ac98d646\x2d15f1\x2d4d2c\x2d9ee6\x2d19650962f029-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 16 09:45:27.631218 systemd[1]: var-lib-kubelet-pods-ac98d646\x2d15f1\x2d4d2c\x2d9ee6\x2d19650962f029-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 16 09:45:27.800272 kubelet[2794]: I1216 09:45:27.800235 2794 scope.go:117] "RemoveContainer" containerID="11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9"
Dec 16 09:45:27.802952 containerd[1491]: time="2024-12-16T09:45:27.802576816Z" level=info msg="RemoveContainer for \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\""
Dec 16 09:45:27.808033 containerd[1491]: time="2024-12-16T09:45:27.807997047Z" level=info msg="RemoveContainer for \"11aa104d8b71ba9151f12797e2162c017d69cc0c33715fedfb4ec2d27800e4e9\" returns successfully"
Dec 16 09:45:27.813402 kubelet[2794]: I1216 09:45:27.813372 2794 scope.go:117] "RemoveContainer" containerID="a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67"
Dec 16 09:45:27.815127 containerd[1491]: time="2024-12-16T09:45:27.815094338Z" level=info msg="RemoveContainer for \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\""
Dec 16 09:45:27.818293 containerd[1491]: time="2024-12-16T09:45:27.818262916Z" level=info msg="RemoveContainer for \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\" returns successfully"
Dec 16 09:45:27.818737 kubelet[2794]: I1216 09:45:27.818418 2794 scope.go:117] "RemoveContainer" containerID="65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c"
Dec 16 09:45:27.820987 containerd[1491]: time="2024-12-16T09:45:27.820603336Z" level=info msg="RemoveContainer for \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\""
Dec 16 09:45:27.824376 containerd[1491]: time="2024-12-16T09:45:27.824086874Z" level=info msg="RemoveContainer for \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\" returns successfully"
Dec 16 09:45:27.824547 kubelet[2794]: I1216 09:45:27.824531 2794 scope.go:117] "RemoveContainer" containerID="095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9"
Dec 16 09:45:27.825411 containerd[1491]: time="2024-12-16T09:45:27.825393168Z" level=info msg="RemoveContainer for \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\""
Dec 16 09:45:27.828748 containerd[1491]: time="2024-12-16T09:45:27.828700125Z" level=info msg="RemoveContainer for \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\" returns successfully"
Dec 16 09:45:27.828955 kubelet[2794]: I1216 09:45:27.828930 2794 scope.go:117] "RemoveContainer" containerID="54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7"
Dec 16 09:45:27.830508 containerd[1491]: time="2024-12-16T09:45:27.830271485Z" level=info msg="RemoveContainer for \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\""
Dec 16 09:45:27.832807 containerd[1491]: time="2024-12-16T09:45:27.832782314Z" level=info msg="RemoveContainer for \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\" returns successfully"
Dec 16 09:45:27.832928 kubelet[2794]: I1216 09:45:27.832905 2794 scope.go:117] "RemoveContainer" containerID="aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591"
Dec 16 09:45:27.833839 containerd[1491]: time="2024-12-16T09:45:27.833803294Z" level=info msg="RemoveContainer for \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\""
Dec 16 09:45:27.838330 containerd[1491]: time="2024-12-16T09:45:27.837838154Z" level=info msg="RemoveContainer for \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\" returns successfully"
Dec 16 09:45:27.838400 kubelet[2794]: I1216 09:45:27.838149 2794 scope.go:117] "RemoveContainer" containerID="a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67"
Dec 16 09:45:27.848357 containerd[1491]: time="2024-12-16T09:45:27.840946290Z" level=error msg="ContainerStatus for \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\": not found"
Dec 16 09:45:27.852844 kubelet[2794]: E1216 09:45:27.852497 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\": not found" containerID="a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67"
Dec 16 09:45:27.852844 kubelet[2794]: I1216 09:45:27.852546 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67"} err="failed to get container status \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1e32699cd6bb265a0f2c1fc8b0fcfa60b92db0b163e66b31666721ee2232c67\": not found"
Dec 16 09:45:27.852844 kubelet[2794]: I1216 09:45:27.852617 2794 scope.go:117] "RemoveContainer" containerID="65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c"
Dec 16 09:45:27.852974 containerd[1491]: time="2024-12-16T09:45:27.852810049Z" level=error msg="ContainerStatus for \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\": not found"
Dec 16 09:45:27.853893 kubelet[2794]: E1216 09:45:27.852955 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\": not found" containerID="65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c"
Dec 16 09:45:27.853893 kubelet[2794]: I1216 09:45:27.853867 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c"} err="failed to get container status \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\": rpc error: code = NotFound desc = an error occurred when try to find container \"65a531edc927cf7cdfe8957249c9eae0325c574e88e66d95a4f489207f0c368c\": not found"
Dec 16 09:45:27.853893 kubelet[2794]: I1216 09:45:27.853881 2794 scope.go:117] "RemoveContainer" containerID="095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9"
Dec 16 09:45:27.855014 containerd[1491]: time="2024-12-16T09:45:27.854982694Z" level=error msg="ContainerStatus for \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\": not found"
Dec 16 09:45:27.855247 kubelet[2794]: E1216 09:45:27.855124 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\": not found" containerID="095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9"
Dec 16 09:45:27.855247 kubelet[2794]: I1216 09:45:27.855170 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9"} err="failed to get container status \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"095069aa854070cf9b4790d8c4a64646d573356b8c39d6b95490511473df64d9\": not found"
Dec 16 09:45:27.855247 kubelet[2794]: I1216 09:45:27.855184 2794 scope.go:117] "RemoveContainer" containerID="54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7"
Dec 16 09:45:27.855360 containerd[1491]: time="2024-12-16T09:45:27.855339452Z" level=error msg="ContainerStatus for \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\": not found"
Dec 16 09:45:27.855608 kubelet[2794]: E1216 09:45:27.855421 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\": not found" containerID="54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7"
Dec 16 09:45:27.855608 kubelet[2794]: I1216 09:45:27.855465 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7"} err="failed to get container status \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"54ed152e29d8b83fb5a6275d06669e9fe0d218e6bd0a99c0920d7e748a8120c7\": not found"
Dec 16 09:45:27.855608 kubelet[2794]: I1216 09:45:27.855480 2794 scope.go:117] "RemoveContainer" containerID="aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591"
Dec 16 09:45:27.855690 containerd[1491]: time="2024-12-16T09:45:27.855603765Z" level=error msg="ContainerStatus for \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\": not found"
Dec 16 09:45:27.855774 kubelet[2794]: E1216 09:45:27.855754 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\": not found" containerID="aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591"
Dec 16 09:45:27.855807 kubelet[2794]: I1216 09:45:27.855773 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591"} err="failed to get container status \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa00de5d068855d643232bbb4fecc0b93fcf0303ca1d4fe7e75e552181818591\": not found"
Dec 16 09:45:28.631897 sshd[4374]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:28.635225 systemd[1]: sshd@20-138.199.148.223:22-147.75.109.163:59864.service: Deactivated successfully.
Dec 16 09:45:28.637604 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 09:45:28.639478 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit.
Dec 16 09:45:28.641225 systemd-logind[1475]: Removed session 21.
Dec 16 09:45:28.805688 systemd[1]: Started sshd@21-138.199.148.223:22-147.75.109.163:40746.service - OpenSSH per-connection server daemon (147.75.109.163:40746).
Dec 16 09:45:29.065544 kubelet[2794]: I1216 09:45:29.065401 2794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" path="/var/lib/kubelet/pods/ac98d646-15f1-4d2c-9ee6-19650962f029/volumes"
Dec 16 09:45:29.066984 kubelet[2794]: I1216 09:45:29.066954 2794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ec3fce-c376-4ad4-90a5-5fe1d3df7028" path="/var/lib/kubelet/pods/f4ec3fce-c376-4ad4-90a5-5fe1d3df7028/volumes"
Dec 16 09:45:29.788178 sshd[4542]: Accepted publickey for core from 147.75.109.163 port 40746 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk
Dec 16 09:45:29.790141 sshd[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 09:45:29.795572 systemd-logind[1475]: New session 22 of user core.
Dec 16 09:45:29.800596 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 09:45:30.714632 kubelet[2794]: I1216 09:45:30.712868 2794 topology_manager.go:215] "Topology Admit Handler" podUID="57a003e1-db46-456e-a47f-1d01ddea7f96" podNamespace="kube-system" podName="cilium-mh2z4"
Dec 16 09:45:30.715861 kubelet[2794]: E1216 09:45:30.715821 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" containerName="apply-sysctl-overwrites"
Dec 16 09:45:30.715861 kubelet[2794]: E1216 09:45:30.715847 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" containerName="mount-bpf-fs"
Dec 16 09:45:30.715861 kubelet[2794]: E1216 09:45:30.715855 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" containerName="clean-cilium-state"
Dec 16 09:45:30.715861 kubelet[2794]: E1216 09:45:30.715860 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" containerName="cilium-agent"
Dec 16 09:45:30.715861 kubelet[2794]: E1216 09:45:30.715866 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4ec3fce-c376-4ad4-90a5-5fe1d3df7028" containerName="cilium-operator"
Dec 16 09:45:30.716049 kubelet[2794]: E1216 09:45:30.715875 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" containerName="mount-cgroup"
Dec 16 09:45:30.716049 kubelet[2794]: I1216 09:45:30.715900 2794 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac98d646-15f1-4d2c-9ee6-19650962f029" containerName="cilium-agent"
Dec 16 09:45:30.716049 kubelet[2794]: I1216 09:45:30.715906 2794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ec3fce-c376-4ad4-90a5-5fe1d3df7028" containerName="cilium-operator"
Dec 16 09:45:30.749880 systemd[1]: Created slice kubepods-burstable-pod57a003e1_db46_456e_a47f_1d01ddea7f96.slice - libcontainer container kubepods-burstable-pod57a003e1_db46_456e_a47f_1d01ddea7f96.slice.
Dec 16 09:45:30.889059 kubelet[2794]: I1216 09:45:30.889006 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-lib-modules\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889059 kubelet[2794]: I1216 09:45:30.889052 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57a003e1-db46-456e-a47f-1d01ddea7f96-cilium-config-path\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889059 kubelet[2794]: I1216 09:45:30.889077 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-cilium-cgroup\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889312 kubelet[2794]: I1216 09:45:30.889092 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-etc-cni-netd\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889312 kubelet[2794]: I1216 09:45:30.889107 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-hostproc\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889312 kubelet[2794]: I1216 09:45:30.889120 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-cni-path\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889312 kubelet[2794]: I1216 09:45:30.889136 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-host-proc-sys-net\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889312 kubelet[2794]: I1216 09:45:30.889152 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-host-proc-sys-kernel\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889312 kubelet[2794]: I1216 09:45:30.889166 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmnwg\" (UniqueName: \"kubernetes.io/projected/57a003e1-db46-456e-a47f-1d01ddea7f96-kube-api-access-jmnwg\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889531 kubelet[2794]: I1216 09:45:30.889180 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-xtables-lock\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889531 kubelet[2794]: I1216 09:45:30.889194 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57a003e1-db46-456e-a47f-1d01ddea7f96-clustermesh-secrets\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889531 kubelet[2794]: I1216 09:45:30.889210 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-cilium-run\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889531 kubelet[2794]: I1216 09:45:30.889224 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57a003e1-db46-456e-a47f-1d01ddea7f96-bpf-maps\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889531 kubelet[2794]: I1216 09:45:30.889238 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57a003e1-db46-456e-a47f-1d01ddea7f96-hubble-tls\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.889531 kubelet[2794]: I1216 09:45:30.889253 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/57a003e1-db46-456e-a47f-1d01ddea7f96-cilium-ipsec-secrets\") pod \"cilium-mh2z4\" (UID: \"57a003e1-db46-456e-a47f-1d01ddea7f96\") " pod="kube-system/cilium-mh2z4"
Dec 16 09:45:30.923955 sshd[4542]: pam_unix(sshd:session): session closed for user core
Dec 16 09:45:30.927546 systemd[1]: sshd@21-138.199.148.223:22-147.75.109.163:40746.service: Deactivated successfully.
Dec 16 09:45:30.930196 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 09:45:30.931888 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit.
Dec 16 09:45:30.933227 systemd-logind[1475]: Removed session 22.
Dec 16 09:45:31.054418 containerd[1491]: time="2024-12-16T09:45:31.054257754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mh2z4,Uid:57a003e1-db46-456e-a47f-1d01ddea7f96,Namespace:kube-system,Attempt:0,}"
Dec 16 09:45:31.077950 containerd[1491]: time="2024-12-16T09:45:31.077855469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 16 09:45:31.078699 containerd[1491]: time="2024-12-16T09:45:31.078574694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 16 09:45:31.078699 containerd[1491]: time="2024-12-16T09:45:31.078596965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:31.078699 containerd[1491]: time="2024-12-16T09:45:31.078668079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 16 09:45:31.102715 systemd[1]: Started sshd@22-138.199.148.223:22-147.75.109.163:40748.service - OpenSSH per-connection server daemon (147.75.109.163:40748).
Dec 16 09:45:31.107194 systemd[1]: Started cri-containerd-ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf.scope - libcontainer container ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf.
Dec 16 09:45:31.139423 containerd[1491]: time="2024-12-16T09:45:31.139380201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mh2z4,Uid:57a003e1-db46-456e-a47f-1d01ddea7f96,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\""
Dec 16 09:45:31.148951 containerd[1491]: time="2024-12-16T09:45:31.148894673Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 09:45:31.160971 containerd[1491]: time="2024-12-16T09:45:31.160913172Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c\""
Dec 16 09:45:31.162343 containerd[1491]: time="2024-12-16T09:45:31.162312731Z" level=info msg="StartContainer for \"668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c\""
Dec 16 09:45:31.187551 systemd[1]: Started cri-containerd-668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c.scope - libcontainer container 668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c.
Dec 16 09:45:31.213734 containerd[1491]: time="2024-12-16T09:45:31.213626166Z" level=info msg="StartContainer for \"668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c\" returns successfully"
Dec 16 09:45:31.230843 systemd[1]: cri-containerd-668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c.scope: Deactivated successfully.
Dec 16 09:45:31.261277 containerd[1491]: time="2024-12-16T09:45:31.261145251Z" level=info msg="shim disconnected" id=668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c namespace=k8s.io Dec 16 09:45:31.261277 containerd[1491]: time="2024-12-16T09:45:31.261194644Z" level=warning msg="cleaning up after shim disconnected" id=668fe4f7f2c473a2e120c48368076b9b07df03182a358697da41e6c73976a23c namespace=k8s.io Dec 16 09:45:31.261277 containerd[1491]: time="2024-12-16T09:45:31.261202929Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:45:31.822351 containerd[1491]: time="2024-12-16T09:45:31.821758264Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 09:45:31.834754 containerd[1491]: time="2024-12-16T09:45:31.834683839Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948\"" Dec 16 09:45:31.837611 containerd[1491]: time="2024-12-16T09:45:31.836074612Z" level=info msg="StartContainer for \"e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948\"" Dec 16 09:45:31.868633 systemd[1]: Started cri-containerd-e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948.scope - libcontainer container e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948. Dec 16 09:45:31.895195 containerd[1491]: time="2024-12-16T09:45:31.895063560Z" level=info msg="StartContainer for \"e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948\" returns successfully" Dec 16 09:45:31.904843 systemd[1]: cri-containerd-e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948.scope: Deactivated successfully. 
Dec 16 09:45:31.927055 containerd[1491]: time="2024-12-16T09:45:31.926968037Z" level=info msg="shim disconnected" id=e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948 namespace=k8s.io Dec 16 09:45:31.927055 containerd[1491]: time="2024-12-16T09:45:31.927042767Z" level=warning msg="cleaning up after shim disconnected" id=e3f67f2e5deca7f8f98d2679df1fc1e978267d7cf35765583421b09b0fa14948 namespace=k8s.io Dec 16 09:45:31.927055 containerd[1491]: time="2024-12-16T09:45:31.927053688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:45:32.091394 sshd[4586]: Accepted publickey for core from 147.75.109.163 port 40748 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:45:32.093119 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:45:32.099768 systemd-logind[1475]: New session 23 of user core. Dec 16 09:45:32.102595 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 09:45:32.206554 kubelet[2794]: E1216 09:45:32.206422 2794 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 09:45:32.776300 sshd[4586]: pam_unix(sshd:session): session closed for user core Dec 16 09:45:32.780210 systemd[1]: sshd@22-138.199.148.223:22-147.75.109.163:40748.service: Deactivated successfully. Dec 16 09:45:32.782375 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 09:45:32.783620 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit. Dec 16 09:45:32.784697 systemd-logind[1475]: Removed session 23. 
Dec 16 09:45:32.824731 containerd[1491]: time="2024-12-16T09:45:32.824647306Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 09:45:32.850246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383384407.mount: Deactivated successfully. Dec 16 09:45:32.852322 containerd[1491]: time="2024-12-16T09:45:32.852258942Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647\"" Dec 16 09:45:32.854527 containerd[1491]: time="2024-12-16T09:45:32.853986945Z" level=info msg="StartContainer for \"90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647\"" Dec 16 09:45:32.890601 systemd[1]: Started cri-containerd-90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647.scope - libcontainer container 90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647. Dec 16 09:45:32.921510 containerd[1491]: time="2024-12-16T09:45:32.921464256Z" level=info msg="StartContainer for \"90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647\" returns successfully" Dec 16 09:45:32.937688 systemd[1]: cri-containerd-90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647.scope: Deactivated successfully. Dec 16 09:45:32.944590 systemd[1]: Started sshd@23-138.199.148.223:22-147.75.109.163:40754.service - OpenSSH per-connection server daemon (147.75.109.163:40754). 
Dec 16 09:45:32.961459 containerd[1491]: time="2024-12-16T09:45:32.961351842Z" level=info msg="shim disconnected" id=90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647 namespace=k8s.io Dec 16 09:45:32.961459 containerd[1491]: time="2024-12-16T09:45:32.961419257Z" level=warning msg="cleaning up after shim disconnected" id=90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647 namespace=k8s.io Dec 16 09:45:32.961459 containerd[1491]: time="2024-12-16T09:45:32.961445357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:45:32.996768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90f99d5e84b6b00c59d74b6507355c8c85af6080c08e51221fc76d3a1d448647-rootfs.mount: Deactivated successfully. Dec 16 09:45:33.837548 containerd[1491]: time="2024-12-16T09:45:33.837404497Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 09:45:33.852892 containerd[1491]: time="2024-12-16T09:45:33.852838146Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18\"" Dec 16 09:45:33.855946 containerd[1491]: time="2024-12-16T09:45:33.855890256Z" level=info msg="StartContainer for \"96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18\"" Dec 16 09:45:33.894611 systemd[1]: Started cri-containerd-96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18.scope - libcontainer container 96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18. 
Dec 16 09:45:33.917202 sshd[4770]: Accepted publickey for core from 147.75.109.163 port 40754 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:45:33.916971 sshd[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:45:33.924828 systemd-logind[1475]: New session 24 of user core. Dec 16 09:45:33.929564 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 09:45:33.929768 systemd[1]: cri-containerd-96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18.scope: Deactivated successfully. Dec 16 09:45:33.934467 containerd[1491]: time="2024-12-16T09:45:33.933832709Z" level=info msg="StartContainer for \"96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18\" returns successfully" Dec 16 09:45:33.962406 containerd[1491]: time="2024-12-16T09:45:33.962337737Z" level=info msg="shim disconnected" id=96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18 namespace=k8s.io Dec 16 09:45:33.962940 containerd[1491]: time="2024-12-16T09:45:33.962514337Z" level=warning msg="cleaning up after shim disconnected" id=96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18 namespace=k8s.io Dec 16 09:45:33.962940 containerd[1491]: time="2024-12-16T09:45:33.962529516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:45:33.997144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96473ae0f461ef0f18f1fc54c6417d4f3c41492f92c3e0668874a4a2b922ac18-rootfs.mount: Deactivated successfully. 
Dec 16 09:45:34.834054 containerd[1491]: time="2024-12-16T09:45:34.834001705Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 09:45:34.854069 containerd[1491]: time="2024-12-16T09:45:34.853541317Z" level=info msg="CreateContainer within sandbox \"ea28ee8281abd30604e8727195c87cdc5581dd7039015b10300acbfe8ff52dbf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0\"" Dec 16 09:45:34.855591 containerd[1491]: time="2024-12-16T09:45:34.854662825Z" level=info msg="StartContainer for \"6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0\"" Dec 16 09:45:34.856550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437245868.mount: Deactivated successfully. Dec 16 09:45:34.910774 systemd[1]: Started cri-containerd-6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0.scope - libcontainer container 6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0. Dec 16 09:45:34.984716 containerd[1491]: time="2024-12-16T09:45:34.983663767Z" level=info msg="StartContainer for \"6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0\" returns successfully" Dec 16 09:45:34.998187 systemd[1]: run-containerd-runc-k8s.io-6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0-runc.1m9qm9.mount: Deactivated successfully. 
Dec 16 09:45:35.578469 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 16 09:45:36.315572 kubelet[2794]: I1216 09:45:36.315493 2794 setters.go:580] "Node became not ready" node="ci-4081-2-1-b-2c3a583fea" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-16T09:45:36Z","lastTransitionTime":"2024-12-16T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 16 09:45:36.709106 systemd[1]: run-containerd-runc-k8s.io-6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0-runc.e3vN35.mount: Deactivated successfully. Dec 16 09:45:37.063775 containerd[1491]: time="2024-12-16T09:45:37.063511420Z" level=info msg="StopPodSandbox for \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\"" Dec 16 09:45:37.063775 containerd[1491]: time="2024-12-16T09:45:37.063649999Z" level=info msg="TearDown network for sandbox \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" successfully" Dec 16 09:45:37.063775 containerd[1491]: time="2024-12-16T09:45:37.063660499Z" level=info msg="StopPodSandbox for \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" returns successfully" Dec 16 09:45:37.066512 containerd[1491]: time="2024-12-16T09:45:37.064877086Z" level=info msg="RemovePodSandbox for \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\"" Dec 16 09:45:37.066512 containerd[1491]: time="2024-12-16T09:45:37.065317229Z" level=info msg="Forcibly stopping sandbox \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\"" Dec 16 09:45:37.066512 containerd[1491]: time="2024-12-16T09:45:37.065375058Z" level=info msg="TearDown network for sandbox \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" successfully" Dec 16 09:45:37.073990 containerd[1491]: time="2024-12-16T09:45:37.073778371Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:45:37.073990 containerd[1491]: time="2024-12-16T09:45:37.073856568Z" level=info msg="RemovePodSandbox \"1b76df4b39e88ef8730401f8edc20a1374d882bc59b8a8d98f2fdc260fdc07c8\" returns successfully" Dec 16 09:45:37.074459 containerd[1491]: time="2024-12-16T09:45:37.074397850Z" level=info msg="StopPodSandbox for \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\"" Dec 16 09:45:37.074548 containerd[1491]: time="2024-12-16T09:45:37.074520901Z" level=info msg="TearDown network for sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" successfully" Dec 16 09:45:37.074606 containerd[1491]: time="2024-12-16T09:45:37.074544144Z" level=info msg="StopPodSandbox for \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" returns successfully" Dec 16 09:45:37.074912 containerd[1491]: time="2024-12-16T09:45:37.074861758Z" level=info msg="RemovePodSandbox for \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\"" Dec 16 09:45:37.074912 containerd[1491]: time="2024-12-16T09:45:37.074896974Z" level=info msg="Forcibly stopping sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\"" Dec 16 09:45:37.075054 containerd[1491]: time="2024-12-16T09:45:37.074963048Z" level=info msg="TearDown network for sandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" successfully" Dec 16 09:45:37.079787 containerd[1491]: time="2024-12-16T09:45:37.079735628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 16 09:45:37.079866 containerd[1491]: time="2024-12-16T09:45:37.079836898Z" level=info msg="RemovePodSandbox \"e7e37930a21add5d2403f103fed40658afce7ecc15f8947f44065b76b7568d38\" returns successfully" Dec 16 09:45:38.536300 systemd-networkd[1390]: lxc_health: Link UP Dec 16 09:45:38.544779 systemd-networkd[1390]: lxc_health: Gained carrier Dec 16 09:45:39.079299 kubelet[2794]: I1216 09:45:39.079210 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mh2z4" podStartSLOduration=9.079193888 podStartE2EDuration="9.079193888s" podCreationTimestamp="2024-12-16 09:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-16 09:45:35.850289862 +0000 UTC m=+358.916699985" watchObservedRunningTime="2024-12-16 09:45:39.079193888 +0000 UTC m=+362.145603981" Dec 16 09:45:39.694081 systemd-networkd[1390]: lxc_health: Gained IPv6LL Dec 16 09:45:41.285011 systemd[1]: run-containerd-runc-k8s.io-6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0-runc.Zxfq6J.mount: Deactivated successfully. Dec 16 09:45:45.644255 systemd[1]: run-containerd-runc-k8s.io-6f02852017505ebbba77c45f7962bcf049461486e2d0ea4aa9ad7769d72348b0-runc.vfjZwl.mount: Deactivated successfully. Dec 16 09:45:45.863030 sshd[4770]: pam_unix(sshd:session): session closed for user core Dec 16 09:45:45.866884 systemd[1]: sshd@23-138.199.148.223:22-147.75.109.163:40754.service: Deactivated successfully. Dec 16 09:45:45.869065 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 09:45:45.870221 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit. Dec 16 09:45:45.871328 systemd-logind[1475]: Removed session 24. Dec 16 09:46:01.781010 systemd[1]: cri-containerd-9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f.scope: Deactivated successfully. 
Dec 16 09:46:01.781282 systemd[1]: cri-containerd-9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f.scope: Consumed 1.617s CPU time, 17.8M memory peak, 0B memory swap peak. Dec 16 09:46:01.790266 kubelet[2794]: E1216 09:46:01.789915 2794 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57854->10.0.0.2:2379: read: connection timed out" Dec 16 09:46:01.806312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f-rootfs.mount: Deactivated successfully. Dec 16 09:46:01.811934 containerd[1491]: time="2024-12-16T09:46:01.811869945Z" level=info msg="shim disconnected" id=9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f namespace=k8s.io Dec 16 09:46:01.811934 containerd[1491]: time="2024-12-16T09:46:01.811922272Z" level=warning msg="cleaning up after shim disconnected" id=9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f namespace=k8s.io Dec 16 09:46:01.811934 containerd[1491]: time="2024-12-16T09:46:01.811932251Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:46:01.884600 kubelet[2794]: I1216 09:46:01.884362 2794 scope.go:117] "RemoveContainer" containerID="9c42544dd956d2d4c013e6d495e211ddfbd2c17c2d53373f7e623d317b46435f" Dec 16 09:46:01.887253 containerd[1491]: time="2024-12-16T09:46:01.887189677Z" level=info msg="CreateContainer within sandbox \"b73da414ccb81e6a652e6c95367e5fb1672340692c7b22b4e7ee18eea2a86835\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 09:46:01.900589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931810890.mount: Deactivated successfully. 
Dec 16 09:46:01.903760 containerd[1491]: time="2024-12-16T09:46:01.903642846Z" level=info msg="CreateContainer within sandbox \"b73da414ccb81e6a652e6c95367e5fb1672340692c7b22b4e7ee18eea2a86835\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9fa9b119a74282d5e89c14a630fe9bd5b0b3b7ae696185dfcaa411ec4e491d0c\"" Dec 16 09:46:01.905104 containerd[1491]: time="2024-12-16T09:46:01.904217802Z" level=info msg="StartContainer for \"9fa9b119a74282d5e89c14a630fe9bd5b0b3b7ae696185dfcaa411ec4e491d0c\"" Dec 16 09:46:01.904440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236557391.mount: Deactivated successfully. Dec 16 09:46:01.937699 systemd[1]: Started cri-containerd-9fa9b119a74282d5e89c14a630fe9bd5b0b3b7ae696185dfcaa411ec4e491d0c.scope - libcontainer container 9fa9b119a74282d5e89c14a630fe9bd5b0b3b7ae696185dfcaa411ec4e491d0c. Dec 16 09:46:01.976815 containerd[1491]: time="2024-12-16T09:46:01.976736177Z" level=info msg="StartContainer for \"9fa9b119a74282d5e89c14a630fe9bd5b0b3b7ae696185dfcaa411ec4e491d0c\" returns successfully" Dec 16 09:46:02.150716 systemd[1]: cri-containerd-928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54.scope: Deactivated successfully. Dec 16 09:46:02.151714 systemd[1]: cri-containerd-928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54.scope: Consumed 5.833s CPU time, 26.7M memory peak, 0B memory swap peak. 
Dec 16 09:46:02.179695 containerd[1491]: time="2024-12-16T09:46:02.179623875Z" level=info msg="shim disconnected" id=928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54 namespace=k8s.io Dec 16 09:46:02.179695 containerd[1491]: time="2024-12-16T09:46:02.179684820Z" level=warning msg="cleaning up after shim disconnected" id=928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54 namespace=k8s.io Dec 16 09:46:02.179695 containerd[1491]: time="2024-12-16T09:46:02.179693887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:46:02.195565 containerd[1491]: time="2024-12-16T09:46:02.195468825Z" level=warning msg="cleanup warnings time=\"2024-12-16T09:46:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 16 09:46:02.807631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54-rootfs.mount: Deactivated successfully. 
Dec 16 09:46:02.889749 kubelet[2794]: I1216 09:46:02.888984 2794 scope.go:117] "RemoveContainer" containerID="928dc52209634a558dce2a4305f2d373c6841a3ef2be2b49382c565b6692bc54" Dec 16 09:46:02.893734 containerd[1491]: time="2024-12-16T09:46:02.893566499Z" level=info msg="CreateContainer within sandbox \"bca855697465f407eb060a7a5ff21d68322e635ac24345446e6546f68f58f6dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 09:46:02.923670 containerd[1491]: time="2024-12-16T09:46:02.923627376Z" level=info msg="CreateContainer within sandbox \"bca855697465f407eb060a7a5ff21d68322e635ac24345446e6546f68f58f6dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3012e381627da18884b6f0145947976867766d3dc85bc7b79a41433a32772d8f\"" Dec 16 09:46:02.924576 containerd[1491]: time="2024-12-16T09:46:02.924344189Z" level=info msg="StartContainer for \"3012e381627da18884b6f0145947976867766d3dc85bc7b79a41433a32772d8f\"" Dec 16 09:46:02.965023 systemd[1]: Started cri-containerd-3012e381627da18884b6f0145947976867766d3dc85bc7b79a41433a32772d8f.scope - libcontainer container 3012e381627da18884b6f0145947976867766d3dc85bc7b79a41433a32772d8f. 
Dec 16 09:46:03.013796 containerd[1491]: time="2024-12-16T09:46:03.013709839Z" level=info msg="StartContainer for \"3012e381627da18884b6f0145947976867766d3dc85bc7b79a41433a32772d8f\" returns successfully" Dec 16 09:46:06.127336 kubelet[2794]: E1216 09:46:06.127145 2794 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57658->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-b-2c3a583fea.18119f2efd642a54 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-b-2c3a583fea,UID:da199e9d38e55dead070c209624f631b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-2c3a583fea,},FirstTimestamp:2024-12-16 09:45:55.6828637 +0000 UTC m=+378.749273792,LastTimestamp:2024-12-16 09:45:55.6828637 +0000 UTC m=+378.749273792,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-2c3a583fea,}" Dec 16 09:46:07.108752 update_engine[1479]: I20241216 09:46:07.108666 1479 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 09:46:07.108752 update_engine[1479]: I20241216 09:46:07.108734 1479 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 09:46:07.112857 update_engine[1479]: I20241216 09:46:07.112818 1479 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 09:46:07.113294 update_engine[1479]: I20241216 09:46:07.113260 1479 omaha_request_params.cc:62] Current group set to stable Dec 16 09:46:07.113406 update_engine[1479]: I20241216 09:46:07.113379 1479 
update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 09:46:07.114103 update_engine[1479]: I20241216 09:46:07.113500 1479 update_attempter.cc:643] Scheduling an action processor start. Dec 16 09:46:07.114103 update_engine[1479]: I20241216 09:46:07.113535 1479 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 09:46:07.114103 update_engine[1479]: I20241216 09:46:07.113588 1479 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 09:46:07.114103 update_engine[1479]: I20241216 09:46:07.113647 1479 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 09:46:07.114103 update_engine[1479]: I20241216 09:46:07.113657 1479 omaha_request_action.cc:272] Request: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: Dec 16 09:46:07.114103 update_engine[1479]: I20241216 09:46:07.113664 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 09:46:07.124643 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 09:46:07.127839 update_engine[1479]: I20241216 09:46:07.127790 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 09:46:07.128119 update_engine[1479]: I20241216 09:46:07.128072 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 09:46:07.129003 update_engine[1479]: E20241216 09:46:07.128959 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 09:46:07.129057 update_engine[1479]: I20241216 09:46:07.129016 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 09:46:11.793192 kubelet[2794]: E1216 09:46:11.793123 2794 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-2-1-b-2c3a583fea)" Dec 16 09:46:12.334366 kubelet[2794]: I1216 09:46:12.334303 2794 status_manager.go:853] "Failed to get status for pod" podUID="007695315bc95d4e50154860454250e8" pod="kube-system/kube-scheduler-ci-4081-2-1-b-2c3a583fea" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57762->10.0.0.2:2379: read: connection timed out"