Dec 13 01:29:30.146933 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:29:30.146962 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:30.146971 kernel: BIOS-provided physical RAM map:
Dec 13 01:29:30.147026 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:29:30.147032 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:29:30.147038 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:29:30.147045 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 13 01:29:30.147052 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 13 01:29:30.147061 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:29:30.147067 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:29:30.147074 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:29:30.147080 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:29:30.147086 kernel: NX (Execute Disable) protection: active
Dec 13 01:29:30.147092 kernel: APIC: Static calls initialized
Dec 13 01:29:30.147102 kernel: SMBIOS 2.8 present.
Dec 13 01:29:30.147109 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 13 01:29:30.147116 kernel: Hypervisor detected: KVM
Dec 13 01:29:30.147123 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:29:30.147129 kernel: kvm-clock: using sched offset of 3203778086 cycles
Dec 13 01:29:30.147137 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:29:30.147144 kernel: tsc: Detected 2495.310 MHz processor
Dec 13 01:29:30.147151 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:29:30.147158 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:29:30.147167 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 13 01:29:30.147174 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:29:30.147181 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:29:30.147188 kernel: Using GB pages for direct mapping
Dec 13 01:29:30.147194 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:29:30.147201 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 13 01:29:30.147208 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:30.147215 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:30.147222 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:30.147231 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 13 01:29:30.147238 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:30.147245 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:30.147252 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:30.147258 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:30.147265 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 13 01:29:30.147272 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 13 01:29:30.147279 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 13 01:29:30.147291 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 13 01:29:30.147298 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 13 01:29:30.147305 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 13 01:29:30.147312 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 13 01:29:30.147319 kernel: No NUMA configuration found
Dec 13 01:29:30.147326 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 13 01:29:30.147340 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 13 01:29:30.147359 kernel: Zone ranges:
Dec 13 01:29:30.147378 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:29:30.147394 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 13 01:29:30.147412 kernel: Normal empty
Dec 13 01:29:30.147430 kernel: Movable zone start for each node
Dec 13 01:29:30.147449 kernel: Early memory node ranges
Dec 13 01:29:30.147469 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:29:30.147492 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 13 01:29:30.147506 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 13 01:29:30.147516 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:29:30.147523 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:29:30.147530 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:29:30.147537 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:29:30.147544 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:29:30.147551 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:29:30.147558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:29:30.147565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:29:30.147572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:29:30.147582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:29:30.147589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:29:30.147596 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:29:30.147603 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:29:30.147610 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:29:30.147617 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:29:30.147624 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:29:30.147631 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:29:30.147640 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:29:30.147652 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:29:30.147661 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:29:30.147668 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:29:30.147675 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:29:30.147682 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 01:29:30.147690 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:30.147698 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:29:30.147705 kernel: random: crng init done
Dec 13 01:29:30.147715 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:29:30.147722 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:29:30.147732 kernel: Fallback order for Node 0: 0
Dec 13 01:29:30.147742 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 13 01:29:30.147763 kernel: Policy zone: DMA32
Dec 13 01:29:30.147779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:29:30.147787 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 01:29:30.147794 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:29:30.147801 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:29:30.147811 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:29:30.147818 kernel: Dynamic Preempt: voluntary
Dec 13 01:29:30.147826 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:29:30.147834 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:29:30.147841 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:29:30.147848 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:29:30.147856 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:29:30.147863 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:29:30.147870 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:29:30.147880 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:29:30.147888 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:29:30.147897 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:29:30.147906 kernel: Console: colour VGA+ 80x25
Dec 13 01:29:30.147916 kernel: printk: console [tty0] enabled
Dec 13 01:29:30.147925 kernel: printk: console [ttyS0] enabled
Dec 13 01:29:30.147935 kernel: ACPI: Core revision 20230628
Dec 13 01:29:30.147945 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:29:30.147955 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:29:30.147968 kernel: x2apic enabled
Dec 13 01:29:30.151106 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:29:30.151122 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:29:30.151130 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:29:30.151138 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Dec 13 01:29:30.151146 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:29:30.151153 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:29:30.151161 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:29:30.151169 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:29:30.151192 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:29:30.151200 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:29:30.151208 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:29:30.151218 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:29:30.151225 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:29:30.151233 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:29:30.151241 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:29:30.151248 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:29:30.151257 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:29:30.151265 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:29:30.151273 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:29:30.151283 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:29:30.151291 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:29:30.151298 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:29:30.151306 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:29:30.151314 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:29:30.151324 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:29:30.151332 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:29:30.151339 kernel: landlock: Up and running.
Dec 13 01:29:30.151347 kernel: SELinux: Initializing.
Dec 13 01:29:30.151355 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:30.151363 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:30.151371 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:29:30.151378 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:30.151386 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:30.151396 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:30.151404 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:29:30.151411 kernel: ... version: 0
Dec 13 01:29:30.151419 kernel: ... bit width: 48
Dec 13 01:29:30.151426 kernel: ... generic registers: 6
Dec 13 01:29:30.151434 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:29:30.151442 kernel: ... max period: 00007fffffffffff
Dec 13 01:29:30.151450 kernel: ... fixed-purpose events: 0
Dec 13 01:29:30.151457 kernel: ... event mask: 000000000000003f
Dec 13 01:29:30.151468 kernel: signal: max sigframe size: 1776
Dec 13 01:29:30.151475 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:29:30.151484 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:29:30.151491 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:29:30.151499 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:29:30.151506 kernel: .... node #0, CPUs: #1
Dec 13 01:29:30.151514 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:29:30.151522 kernel: smpboot: Max logical packages: 1
Dec 13 01:29:30.151529 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Dec 13 01:29:30.151539 kernel: devtmpfs: initialized
Dec 13 01:29:30.151547 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:29:30.151555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:29:30.151563 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:29:30.151573 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:29:30.151581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:29:30.151590 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:29:30.151598 kernel: audit: type=2000 audit(1734053368.564:1): state=initialized audit_enabled=0 res=1
Dec 13 01:29:30.151606 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:29:30.151616 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:29:30.151624 kernel: cpuidle: using governor menu
Dec 13 01:29:30.151631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:29:30.151639 kernel: dca service started, version 1.12.1
Dec 13 01:29:30.151647 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:29:30.151654 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:29:30.151662 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:29:30.151670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:29:30.151677 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:29:30.151687 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:29:30.151695 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:29:30.151702 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:29:30.151710 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:29:30.151717 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:29:30.151725 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:29:30.151733 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:29:30.151740 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:29:30.151748 kernel: ACPI: Interpreter enabled
Dec 13 01:29:30.151758 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:29:30.151765 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:29:30.151773 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:29:30.151780 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:29:30.151788 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:29:30.151796 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:29:30.152051 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:29:30.152189 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:29:30.152317 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:29:30.152328 kernel: PCI host bridge to bus 0000:00
Dec 13 01:29:30.152462 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:29:30.152578 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:29:30.152691 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:30.152804 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:30.152915 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:30.154344 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:30.154467 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:29:30.154621 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:29:30.154756 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 13 01:29:30.154879 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 13 01:29:30.156042 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 13 01:29:30.156202 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 13 01:29:30.156327 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 13 01:29:30.156451 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:29:30.156584 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.156707 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 13 01:29:30.156837 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.156960 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 13 01:29:30.159544 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.159679 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 13 01:29:30.159810 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.159934 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 13 01:29:30.160130 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.160265 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 13 01:29:30.160395 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.160516 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 13 01:29:30.160644 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.160763 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 13 01:29:30.160895 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.161057 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 13 01:29:30.161224 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:30.161346 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 13 01:29:30.161477 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:29:30.161603 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:29:30.161735 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:29:30.161857 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 13 01:29:30.162002 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 13 01:29:30.162137 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:29:30.162261 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:29:30.162398 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:30.162526 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 13 01:29:30.162659 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 01:29:30.162786 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 13 01:29:30.162919 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:30.164792 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:30.164920 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:30.166809 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 01:29:30.166946 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 13 01:29:30.167090 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:30.167221 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:30.167342 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:30.167481 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 01:29:30.167606 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 13 01:29:30.167734 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 13 01:29:30.167860 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:30.168052 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:30.168184 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:30.168321 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 01:29:30.168449 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 01:29:30.168575 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:30.168693 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:30.168812 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:30.168951 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 01:29:30.169108 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 13 01:29:30.169239 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:30.169359 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:30.169480 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:30.169618 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 01:29:30.169755 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 13 01:29:30.169904 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 13 01:29:30.170071 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:30.170199 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:30.170320 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:30.170331 kernel: acpiphp: Slot [0] registered
Dec 13 01:29:30.170468 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:30.170601 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 13 01:29:30.170746 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 13 01:29:30.170891 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 13 01:29:30.171043 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:30.171167 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:30.171287 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:30.171298 kernel: acpiphp: Slot [0-2] registered
Dec 13 01:29:30.171422 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:30.171544 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:30.171666 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:30.171677 kernel: acpiphp: Slot [0-3] registered
Dec 13 01:29:30.171819 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:30.171950 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:30.172113 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:30.172126 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:29:30.172135 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:29:30.172143 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:29:30.172151 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:29:30.172159 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:29:30.172166 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:29:30.172178 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:29:30.172186 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:29:30.172194 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:29:30.172202 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:29:30.172210 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:29:30.172218 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:29:30.172226 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:29:30.172234 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:29:30.172241 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:29:30.172252 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:29:30.172260 kernel: iommu: Default domain type: Translated
Dec 13 01:29:30.172268 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:29:30.172275 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:29:30.172283 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:29:30.172292 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:29:30.172301 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 13 01:29:30.172426 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:29:30.172549 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:29:30.172675 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:29:30.172687 kernel: vgaarb: loaded
Dec 13 01:29:30.172695 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:29:30.172703 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:29:30.172710 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:29:30.172719 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:29:30.172730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:29:30.172741 kernel: pnp: PnP ACPI init
Dec 13 01:29:30.172890 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:29:30.172908 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:29:30.172916 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:29:30.172924 kernel: NET: Registered PF_INET protocol family
Dec 13 01:29:30.172932 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:29:30.172940 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:29:30.172948 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:29:30.172956 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:29:30.172964 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:29:30.172998 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:29:30.173006 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:30.173014 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:30.173022 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:29:30.173030 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:29:30.173175 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:29:30.173298 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:29:30.173418 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:29:30.173546 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 01:29:30.173667 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 01:29:30.173806 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 01:29:30.173936 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:30.174084 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:30.174206 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:30.174331 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:30.174452 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:30.174577 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:30.174702 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:30.174842 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:30.174965 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:30.179270 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:30.179404 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:30.179524 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:30.179658 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:30.179817 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:30.179942 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:30.180089 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:30.180213 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:30.180334 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:30.180461 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:30.180582 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 13 01:29:30.180704 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:30.180852 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:30.181008 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:30.181181 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 13 01:29:30.181319 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:30.181439 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:30.181563 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:30.181685 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 13 01:29:30.181823 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:30.181954 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:30.184125 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:29:30.184244 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:29:30.184358 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:30.184468 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:30.184576 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:30.184690 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:30.184842 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:30.184963 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:30.185127 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:30.185253 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:30.185381 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:30.185501 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:30.185628 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:30.185754 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:30.185893 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:30.188160 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:30.188363 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:30.188513 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:30.188644 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 13 01:29:30.188774 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:30.188903 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:30.189083 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 13 01:29:30.189231 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:30.189349 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:30.189485 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 13 01:29:30.189603 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:30.189720 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:30.189736 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:29:30.189753 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:29:30.189765 kernel: Initialise system trusted keyrings
Dec 13 01:29:30.189774 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:29:30.189782 kernel: Key type asymmetric registered
Dec 13 01:29:30.189793 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:29:30.189805 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:29:30.189814 kernel: io scheduler mq-deadline registered
Dec 13 01:29:30.189822
kernel: io scheduler kyber registered Dec 13 01:29:30.189830 kernel: io scheduler bfq registered Dec 13 01:29:30.189967 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 01:29:30.190127 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 01:29:30.190255 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 01:29:30.190378 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 01:29:30.190502 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 01:29:30.190623 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 01:29:30.190756 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 01:29:30.190890 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 01:29:30.191051 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 01:29:30.191199 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 01:29:30.191366 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 01:29:30.191526 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 01:29:30.191675 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 01:29:30.191817 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 01:29:30.191947 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 01:29:30.192311 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 01:29:30.192326 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:29:30.192457 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 13 01:29:30.192618 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 13 01:29:30.192634 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:29:30.192643 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 13 01:29:30.192651 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:29:30.192660 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:29:30.192669 kernel: i8042: PNP: PS/2 Controller 
[PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:29:30.192677 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:29:30.192686 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:29:30.192840 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 01:29:30.192856 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:29:30.192971 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 01:29:30.193349 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T01:29:29 UTC (1734053369) Dec 13 01:29:30.193677 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:29:30.193713 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:29:30.193748 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:29:30.193778 kernel: Segment Routing with IPv6 Dec 13 01:29:30.193820 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:29:30.193840 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:29:30.193848 kernel: Key type dns_resolver registered Dec 13 01:29:30.193857 kernel: IPI shorthand broadcast: enabled Dec 13 01:29:30.193866 kernel: sched_clock: Marking stable (1427021570, 145700163)->(1587298577, -14576844) Dec 13 01:29:30.193875 kernel: registered taskstats version 1 Dec 13 01:29:30.193883 kernel: Loading compiled-in X.509 certificates Dec 13 01:29:30.193892 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:29:30.193900 kernel: Key type .fscrypt registered Dec 13 01:29:30.193911 kernel: Key type fscrypt-provisioning registered Dec 13 01:29:30.193919 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:29:30.193928 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:29:30.193936 kernel: ima: No architecture policies found Dec 13 01:29:30.193944 kernel: clk: Disabling unused clocks Dec 13 01:29:30.193952 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:29:30.193961 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:29:30.193970 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:29:30.194059 kernel: Run /init as init process Dec 13 01:29:30.194071 kernel: with arguments: Dec 13 01:29:30.194080 kernel: /init Dec 13 01:29:30.194088 kernel: with environment: Dec 13 01:29:30.194096 kernel: HOME=/ Dec 13 01:29:30.194104 kernel: TERM=linux Dec 13 01:29:30.194113 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:29:30.194126 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:30.194137 systemd[1]: Detected virtualization kvm. Dec 13 01:29:30.194149 systemd[1]: Detected architecture x86-64. Dec 13 01:29:30.194157 systemd[1]: Running in initrd. Dec 13 01:29:30.194166 systemd[1]: No hostname configured, using default hostname. Dec 13 01:29:30.194174 systemd[1]: Hostname set to . Dec 13 01:29:30.194183 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:30.194192 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:29:30.194218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:30.194229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 01:29:30.194252 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:29:30.194261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:30.194270 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:29:30.194279 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:29:30.194290 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:29:30.194298 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:29:30.194310 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:30.194320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:30.194332 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:30.194344 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:30.194356 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:30.194367 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:30.194378 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:30.194389 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:30.194401 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:29:30.194417 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:29:30.194429 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:30.194441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:30.194450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:29:30.194459 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:30.194468 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:29:30.194477 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:30.194488 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:29:30.194500 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:29:30.194509 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:30.194517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:30.194526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:30.194535 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:30.194543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:30.194586 systemd-journald[188]: Collecting audit messages is disabled. Dec 13 01:29:30.194611 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:29:30.194620 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:30.194633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:30.194642 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:29:30.194651 systemd-journald[188]: Journal started Dec 13 01:29:30.194670 systemd-journald[188]: Runtime Journal (/run/log/journal/1f3127063c604ef9abefcf362f411088) is 4.8M, max 38.4M, 33.6M free. Dec 13 01:29:30.147810 systemd-modules-load[189]: Inserted module 'overlay' Dec 13 01:29:30.225383 kernel: Bridge firewalling registered Dec 13 01:29:30.225411 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 13 01:29:30.197348 systemd-modules-load[189]: Inserted module 'br_netfilter' Dec 13 01:29:30.226271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:30.227117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:30.235248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:30.238180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:30.239332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:30.252353 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:30.253659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:30.260107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:29:30.267032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:30.269676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:30.274913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:30.278131 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:29:30.285475 dracut-cmdline[217]: dracut-dracut-053 Dec 13 01:29:30.289485 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:29:30.320480 systemd-resolved[223]: Positive Trust Anchors: Dec 13 01:29:30.324743 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:30.324779 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:30.330809 systemd-resolved[223]: Defaulting to hostname 'linux'. Dec 13 01:29:30.331909 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:30.332679 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:30.374085 kernel: SCSI subsystem initialized Dec 13 01:29:30.384019 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:29:30.398030 kernel: iscsi: registered transport (tcp) Dec 13 01:29:30.422074 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:29:30.422177 kernel: QLogic iSCSI HBA Driver Dec 13 01:29:30.489232 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Dec 13 01:29:30.500187 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:29:30.540681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:29:30.540791 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:29:30.540808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:29:30.599030 kernel: raid6: avx2x4 gen() 20919 MB/s Dec 13 01:29:30.617038 kernel: raid6: avx2x2 gen() 22948 MB/s Dec 13 01:29:30.635470 kernel: raid6: avx2x1 gen() 17707 MB/s Dec 13 01:29:30.635606 kernel: raid6: using algorithm avx2x2 gen() 22948 MB/s Dec 13 01:29:30.654576 kernel: raid6: .... xor() 12536 MB/s, rmw enabled Dec 13 01:29:30.654728 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:29:30.684038 kernel: xor: automatically using best checksumming function avx Dec 13 01:29:30.862055 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:29:30.886522 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:30.895306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:30.911004 systemd-udevd[405]: Using default interface naming scheme 'v255'. Dec 13 01:29:30.916101 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:30.928282 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:29:30.955681 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Dec 13 01:29:31.005491 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:31.014322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:31.099404 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:31.107126 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 13 01:29:31.119807 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:31.123936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:31.128086 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:31.129321 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:31.140797 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:29:31.176122 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:31.211016 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:29:31.213992 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:29:31.223624 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 01:29:31.244016 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:29:31.244077 kernel: ACPI: bus type USB registered Dec 13 01:29:31.245254 kernel: AES CTR mode by8 optimization enabled Dec 13 01:29:31.246855 kernel: usbcore: registered new interface driver usbfs Dec 13 01:29:31.250990 kernel: usbcore: registered new interface driver hub Dec 13 01:29:31.261428 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:31.261604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:31.263107 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:31.264882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:31.265034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:31.266286 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:31.277999 kernel: usbcore: registered new device driver usb Dec 13 01:29:31.279315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:29:31.370018 kernel: libata version 3.00 loaded. Dec 13 01:29:31.395137 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:29:31.400088 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 01:29:31.400305 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 13 01:29:31.402192 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 01:29:31.402346 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 01:29:31.402502 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:29:31.402653 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 13 01:29:31.402812 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:29:31.402998 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:29:31.403154 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 01:29:31.403294 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 01:29:31.403434 kernel: hub 1-0:1.0: USB hub found Dec 13 01:29:31.403604 kernel: hub 1-0:1.0: 4 ports detected Dec 13 01:29:31.403744 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 01:29:31.403913 kernel: hub 2-0:1.0: USB hub found Dec 13 01:29:31.406359 kernel: hub 2-0:1.0: 4 ports detected Dec 13 01:29:31.406508 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:29:31.406520 kernel: GPT:17805311 != 80003071 Dec 13 01:29:31.406530 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:29:31.406541 kernel: GPT:17805311 != 80003071 Dec 13 01:29:31.406551 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 01:29:31.406561 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:31.406575 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:29:31.437025 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:29:31.508888 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:29:31.508909 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:29:31.509152 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:29:31.509317 kernel: scsi host1: ahci Dec 13 01:29:31.509517 kernel: scsi host2: ahci Dec 13 01:29:31.509697 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (453) Dec 13 01:29:31.509711 kernel: scsi host3: ahci Dec 13 01:29:31.509909 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (449) Dec 13 01:29:31.509924 kernel: scsi host4: ahci Dec 13 01:29:31.510137 kernel: scsi host5: ahci Dec 13 01:29:31.510320 kernel: scsi host6: ahci Dec 13 01:29:31.510497 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49 Dec 13 01:29:31.510515 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49 Dec 13 01:29:31.510528 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49 Dec 13 01:29:31.510541 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49 Dec 13 01:29:31.510553 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49 Dec 13 01:29:31.510566 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49 Dec 13 01:29:31.438295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:31.454266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:31.483916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Dec 13 01:29:31.486743 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:31.498848 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 01:29:31.515475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:29:31.523998 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 01:29:31.524681 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 01:29:31.533812 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:29:31.541905 disk-uuid[572]: Primary Header is updated. Dec 13 01:29:31.541905 disk-uuid[572]: Secondary Entries is updated. Dec 13 01:29:31.541905 disk-uuid[572]: Secondary Header is updated. Dec 13 01:29:31.556124 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:31.571082 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:31.583039 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:31.659070 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 01:29:31.809019 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:29:31.816430 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:31.816463 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:31.816481 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:29:31.816492 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:31.816501 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:31.816512 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:29:31.818099 kernel: ata1.00: applying bridge limits Dec 13 01:29:31.820072 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:31.822995 kernel: ata1.00: configured for UDMA/100 Dec 13 01:29:31.827013 
kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:29:31.849970 kernel: usbcore: registered new interface driver usbhid Dec 13 01:29:31.850063 kernel: usbhid: USB HID core driver Dec 13 01:29:31.859712 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 01:29:31.859780 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 01:29:31.869707 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:29:31.892224 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:29:31.892241 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:29:32.584087 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:32.586130 disk-uuid[573]: The operation has completed successfully. Dec 13 01:29:32.677162 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:29:32.677362 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:29:32.707227 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:29:32.712554 sh[595]: Success Dec 13 01:29:32.731073 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:29:32.796761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:29:32.805150 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:29:32.810551 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:29:32.851650 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:29:32.851742 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:32.851763 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:29:32.854697 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:29:32.854730 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:29:32.863991 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:29:32.866386 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:29:32.867565 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:29:32.874180 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:29:32.877202 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:29:32.896001 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:32.896066 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:32.896078 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:32.904588 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:32.904659 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:32.918752 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:29:32.920815 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:32.926592 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:29:32.933253 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 01:29:33.039353 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:33.049263 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:33.057093 ignition[679]: Ignition 2.19.0 Dec 13 01:29:33.057115 ignition[679]: Stage: fetch-offline Dec 13 01:29:33.057212 ignition[679]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:33.057226 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:33.057582 ignition[679]: parsed url from cmdline: "" Dec 13 01:29:33.057587 ignition[679]: no config URL provided Dec 13 01:29:33.057594 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:33.057606 ignition[679]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:33.057613 ignition[679]: failed to fetch config: resource requires networking Dec 13 01:29:33.063464 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:33.057810 ignition[679]: Ignition finished successfully Dec 13 01:29:33.079111 systemd-networkd[780]: lo: Link UP Dec 13 01:29:33.079127 systemd-networkd[780]: lo: Gained carrier Dec 13 01:29:33.082090 systemd-networkd[780]: Enumeration completed Dec 13 01:29:33.082387 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:33.082918 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:33.082923 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:33.084505 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:33.084509 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:29:33.084744 systemd[1]: Reached target network.target - Network. Dec 13 01:29:33.085885 systemd-networkd[780]: eth0: Link UP Dec 13 01:29:33.085890 systemd-networkd[780]: eth0: Gained carrier Dec 13 01:29:33.085897 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:33.092755 systemd-networkd[780]: eth1: Link UP Dec 13 01:29:33.092766 systemd-networkd[780]: eth1: Gained carrier Dec 13 01:29:33.092774 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:33.095202 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:29:33.113078 ignition[783]: Ignition 2.19.0 Dec 13 01:29:33.113092 ignition[783]: Stage: fetch Dec 13 01:29:33.113331 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:33.113346 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:33.113478 ignition[783]: parsed url from cmdline: "" Dec 13 01:29:33.113482 ignition[783]: no config URL provided Dec 13 01:29:33.113487 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:33.113497 ignition[783]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:33.113518 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 01:29:33.113696 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 01:29:33.157069 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:33.207083 systemd-networkd[780]: eth0: DHCPv4 address 78.47.95.53/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:29:33.313953 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 01:29:33.325237 ignition[783]: GET result: 
OK Dec 13 01:29:33.325375 ignition[783]: parsing config with SHA512: 342a4ddafda301644d47558f4d7cb0c1a3c67a12014185195d6259149dc9896e2eae1d8af04008aee2ed04375d9b2c5450de9ef83d798daa0392c0381912773c Dec 13 01:29:33.333700 unknown[783]: fetched base config from "system" Dec 13 01:29:33.333733 unknown[783]: fetched base config from "system" Dec 13 01:29:33.334670 ignition[783]: fetch: fetch complete Dec 13 01:29:33.333748 unknown[783]: fetched user config from "hetzner" Dec 13 01:29:33.334685 ignition[783]: fetch: fetch passed Dec 13 01:29:33.334789 ignition[783]: Ignition finished successfully Dec 13 01:29:33.342812 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:29:33.349287 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:29:33.400891 ignition[790]: Ignition 2.19.0 Dec 13 01:29:33.400918 ignition[790]: Stage: kargs Dec 13 01:29:33.401345 ignition[790]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:33.401372 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:33.403362 ignition[790]: kargs: kargs passed Dec 13 01:29:33.406954 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:29:33.403456 ignition[790]: Ignition finished successfully Dec 13 01:29:33.422244 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:29:33.470829 ignition[796]: Ignition 2.19.0 Dec 13 01:29:33.470853 ignition[796]: Stage: disks Dec 13 01:29:33.471245 ignition[796]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:33.471270 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:33.473152 ignition[796]: disks: disks passed Dec 13 01:29:33.477485 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:29:33.473252 ignition[796]: Ignition finished successfully Dec 13 01:29:33.480619 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 13 01:29:33.482492 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:29:33.484932 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:29:33.487481 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:29:33.489820 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:33.498225 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:29:33.543374 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:29:33.549354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:29:33.558152 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:29:33.696020 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:29:33.697178 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:29:33.698411 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:29:33.717097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:33.720773 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:29:33.722764 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:29:33.724294 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:29:33.724321 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:33.734992 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812)
Dec 13 01:29:33.740004 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:33.741860 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:29:33.747492 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:33.747608 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:33.755426 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:29:33.763511 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:33.763599 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:33.771960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:33.822790 coreos-metadata[814]: Dec 13 01:29:33.822 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 01:29:33.824621 coreos-metadata[814]: Dec 13 01:29:33.824 INFO Fetch successful
Dec 13 01:29:33.825657 coreos-metadata[814]: Dec 13 01:29:33.825 INFO wrote hostname ci-4081-2-1-2-54968c70cf to /sysroot/etc/hostname
Dec 13 01:29:33.829956 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:29:33.830649 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:29:33.837396 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:29:33.843212 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:29:33.850393 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:29:33.993055 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:29:34.001101 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:29:34.011179 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:29:34.022588 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:29:34.033241 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:34.058311 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:29:34.071375 ignition[932]: INFO : Ignition 2.19.0
Dec 13 01:29:34.071375 ignition[932]: INFO : Stage: mount
Dec 13 01:29:34.072957 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:34.072957 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:34.072957 ignition[932]: INFO : mount: mount passed
Dec 13 01:29:34.072957 ignition[932]: INFO : Ignition finished successfully
Dec 13 01:29:34.074646 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:29:34.080095 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:29:34.097415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:34.112082 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942)
Dec 13 01:29:34.116401 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:34.116484 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:34.119657 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:34.126076 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:34.126118 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:34.132363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:34.140133 systemd-networkd[780]: eth1: Gained IPv6LL
Dec 13 01:29:34.163945 ignition[959]: INFO : Ignition 2.19.0
Dec 13 01:29:34.163945 ignition[959]: INFO : Stage: files
Dec 13 01:29:34.166217 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:34.166217 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:34.166217 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:29:34.170181 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:29:34.170181 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:29:34.172872 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:29:34.172872 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:29:34.172872 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:29:34.171747 unknown[959]: wrote ssh authorized keys file for user: core
Dec 13 01:29:34.177886 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:29:34.177886 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:29:34.259850 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:29:34.330945 systemd-networkd[780]: eth0: Gained IPv6LL
Dec 13 01:29:34.425892 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:29:34.427194 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:29:34.427194 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:29:34.976602 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:29:35.086063 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:35.087203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:35.098506 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:35.098506 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:35.098506 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:29:35.098506 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:29:35.098506 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:29:35.098506 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 01:29:35.449557 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:29:35.891154 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:29:35.891154 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:35.894295 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:35.908786 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:35.908786 ignition[959]: INFO : files: files passed
Dec 13 01:29:35.908786 ignition[959]: INFO : Ignition finished successfully
Dec 13 01:29:35.898385 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:29:35.909291 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:29:35.913445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:29:35.923132 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:29:35.924056 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:29:35.934231 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:35.935754 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:35.937142 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:35.937107 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:35.938147 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:29:35.945122 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:29:35.972205 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:29:35.972397 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:29:35.974568 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:35.975507 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:29:35.977088 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:29:35.982271 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:29:35.998268 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:36.005216 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:29:36.019148 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:36.020399 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:36.021000 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:29:36.021584 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:29:36.021767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:36.023609 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:29:36.024474 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:29:36.025718 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:29:36.027236 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:36.028480 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:29:36.029731 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:29:36.031178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:29:36.032722 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:29:36.034213 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:29:36.035631 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:29:36.037090 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:29:36.037214 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:29:36.039113 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:36.039926 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:36.041313 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:29:36.041435 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:36.043107 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:29:36.043252 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:29:36.045844 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:29:36.046175 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:36.047321 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:29:36.047422 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:29:36.048959 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:29:36.049078 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:29:36.059050 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:29:36.060236 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:29:36.060360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:36.075139 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:29:36.076278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:29:36.080377 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:36.081449 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:29:36.081584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:29:36.088929 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:29:36.092087 ignition[1011]: INFO : Ignition 2.19.0
Dec 13 01:29:36.092087 ignition[1011]: INFO : Stage: umount
Dec 13 01:29:36.089090 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:29:36.095607 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:36.095607 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:36.095607 ignition[1011]: INFO : umount: umount passed
Dec 13 01:29:36.095607 ignition[1011]: INFO : Ignition finished successfully
Dec 13 01:29:36.095414 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:29:36.096106 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:29:36.098776 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:29:36.098866 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:29:36.101919 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:29:36.101995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:29:36.102737 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:29:36.102785 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:29:36.105071 systemd[1]: Stopped target network.target - Network.
Dec 13 01:29:36.106114 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:29:36.106169 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:29:36.106683 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:29:36.108084 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:29:36.119190 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:36.121132 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:29:36.121622 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:29:36.122134 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:29:36.122186 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:29:36.122990 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:29:36.123035 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:29:36.124944 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:29:36.125065 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:29:36.137094 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:29:36.137167 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:29:36.138229 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:29:36.139259 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:36.141443 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:29:36.143062 systemd-networkd[780]: eth0: DHCPv6 lease lost
Dec 13 01:29:36.148781 systemd-networkd[780]: eth1: DHCPv6 lease lost
Dec 13 01:29:36.151020 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:29:36.151141 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:36.162194 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:29:36.162334 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:29:36.165504 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:29:36.165552 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:36.170145 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:29:36.173016 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:29:36.173091 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:29:36.173640 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:29:36.173687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:36.177080 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:29:36.177128 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:36.177903 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:29:36.177950 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:36.178589 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:36.180297 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:29:36.180411 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:29:36.188840 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:29:36.188928 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:29:36.191684 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:29:36.191798 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:29:36.194356 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:29:36.194525 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:36.195885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:29:36.195954 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:36.196481 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:29:36.196519 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:36.197539 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:29:36.197589 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:29:36.199072 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:29:36.199118 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:29:36.200025 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:29:36.200071 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:36.205132 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:29:36.205642 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:29:36.205704 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:36.209368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:36.209432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:36.216045 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:29:36.216163 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:29:36.217418 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:29:36.223099 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:29:36.230350 systemd[1]: Switching root.
Dec 13 01:29:36.269388 systemd-journald[188]: Journal stopped
Dec 13 01:29:37.625860 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:29:37.625939 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:29:37.625953 kernel: SELinux: policy capability open_perms=1
Dec 13 01:29:37.625963 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:29:37.636386 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:29:37.636416 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:29:37.636428 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:29:37.636440 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:29:37.636458 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:29:37.636475 kernel: audit: type=1403 audit(1734053376.496:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:29:37.636488 systemd[1]: Successfully loaded SELinux policy in 65.537ms.
Dec 13 01:29:37.636513 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.178ms.
Dec 13 01:29:37.636526 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:29:37.636538 systemd[1]: Detected virtualization kvm.
Dec 13 01:29:37.636553 systemd[1]: Detected architecture x86-64.
Dec 13 01:29:37.636565 systemd[1]: Detected first boot.
Dec 13 01:29:37.636577 systemd[1]: Hostname set to .
Dec 13 01:29:37.636589 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:29:37.636600 zram_generator::config[1053]: No configuration found.
Dec 13 01:29:37.636626 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:29:37.636644 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:29:37.636656 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:29:37.636677 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:29:37.636695 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:29:37.636712 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:29:37.636726 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:29:37.636739 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:29:37.636751 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:29:37.636763 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:29:37.636775 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:29:37.636791 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:29:37.636803 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:37.636815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:37.636827 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:29:37.636839 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:29:37.636851 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:29:37.636862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:29:37.636875 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:29:37.637970 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:37.638004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:29:37.638017 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:37.638029 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:29:37.638041 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:29:37.638053 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:37.638065 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:29:37.638079 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:29:37.638091 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:29:37.638104 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:29:37.638116 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:29:37.638128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:37.638140 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:37.638151 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:37.638166 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:29:37.638180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:29:37.638194 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:29:37.638206 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:29:37.638218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:37.638230 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:29:37.638242 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:29:37.638254 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:29:37.638268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:29:37.638280 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:29:37.638292 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:29:37.638304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:37.638318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:29:37.638330 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:29:37.638342 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:37.638353 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:29:37.638367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:37.638380 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:29:37.638392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:37.638405 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:29:37.638417 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:29:37.638429 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:29:37.638441 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:29:37.638453 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:29:37.638465 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:29:37.638478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:29:37.638490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:29:37.638502 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:29:37.638514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:29:37.638525 kernel: loop: module loaded
Dec 13 01:29:37.638537 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:29:37.638549 systemd[1]: Stopped verity-setup.service.
Dec 13 01:29:37.638561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:37.638596 systemd-journald[1123]: Collecting audit messages is disabled.
Dec 13 01:29:37.638621 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:29:37.638633 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:29:37.638645 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:29:37.638661 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:29:37.638673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:29:37.638685 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:29:37.638697 systemd-journald[1123]: Journal started
Dec 13 01:29:37.638719 systemd-journald[1123]: Runtime Journal (/run/log/journal/1f3127063c604ef9abefcf362f411088) is 4.8M, max 38.4M, 33.6M free.
Dec 13 01:29:37.298669 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:29:37.641329 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:29:37.322298 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:29:37.323645 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:29:37.642273 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:37.643430 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:29:37.643601 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:29:37.653843 kernel: fuse: init (API version 7.39)
Dec 13 01:29:37.645475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:37.645644 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:37.647055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:37.647295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:37.648953 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:37.649590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:37.652352 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:29:37.653182 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:29:37.653347 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:29:37.654652 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:29:37.669549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:37.681304 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:29:37.688071 kernel: ACPI: bus type drm_connector registered
Dec 13 01:29:37.690056 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:29:37.699095 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:29:37.699651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:29:37.699684 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:29:37.701214 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:29:37.703123 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:29:37.707125 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:29:37.707841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:37.716174 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:29:37.723172 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:29:37.723749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:37.725038 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:29:37.726082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:37.729893 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:29:37.737121 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:29:37.740597 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:29:37.740816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:29:37.741539 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:29:37.743185 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:29:37.748088 systemd-journald[1123]: Time spent on flushing to /var/log/journal/1f3127063c604ef9abefcf362f411088 is 135.641ms for 1135 entries.
Dec 13 01:29:37.748088 systemd-journald[1123]: System Journal (/var/log/journal/1f3127063c604ef9abefcf362f411088) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:29:37.913632 systemd-journald[1123]: Received client request to flush runtime journal.
Dec 13 01:29:37.913680 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 01:29:37.913703 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:29:37.750472 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:29:37.784433 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:29:37.786528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:29:37.807367 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:29:37.808272 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:29:37.824597 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:29:37.825997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:37.868378 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:29:37.871759 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:29:37.892635 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:29:37.899167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:29:37.922521 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:29:37.932626 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:37.940007 kernel: loop1: detected capacity change from 0 to 8
Dec 13 01:29:37.943169 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:29:37.960162 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Dec 13 01:29:37.960180 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Dec 13 01:29:37.965491 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:29:37.973199 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 01:29:37.973410 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:38.019004 kernel: loop3: detected capacity change from 0 to 210664
Dec 13 01:29:38.068255 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 01:29:38.095015 kernel: loop5: detected capacity change from 0 to 8
Dec 13 01:29:38.099000 kernel: loop6: detected capacity change from 0 to 142488
Dec 13 01:29:38.118005 kernel: loop7: detected capacity change from 0 to 210664
Dec 13 01:29:38.143056 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 01:29:38.145116 (sd-merge)[1199]: Merged extensions into '/usr'.
Dec 13 01:29:38.151991 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:29:38.152105 systemd[1]: Reloading...
Dec 13 01:29:38.260157 zram_generator::config[1225]: No configuration found.
Dec 13 01:29:38.388075 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:29:38.414929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:29:38.469770 systemd[1]: Reloading finished in 317 ms.
Dec 13 01:29:38.493770 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:29:38.499050 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:29:38.506138 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:29:38.509163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:29:38.523061 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:29:38.523072 systemd[1]: Reloading...
Dec 13 01:29:38.541098 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:29:38.541797 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:29:38.542834 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:29:38.543204 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Dec 13 01:29:38.543342 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Dec 13 01:29:38.547074 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:29:38.547147 systemd-tmpfiles[1269]: Skipping /boot
Dec 13 01:29:38.559491 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:29:38.559549 systemd-tmpfiles[1269]: Skipping /boot
Dec 13 01:29:38.637075 zram_generator::config[1310]: No configuration found.
Dec 13 01:29:38.748639 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:29:38.807769 systemd[1]: Reloading finished in 284 ms.
Dec 13 01:29:38.828413 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:29:38.833511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:38.846243 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:29:38.851588 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:29:38.859279 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:29:38.864969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:38.873269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:38.880263 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:29:38.884553 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:38.884729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:38.888321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:38.894223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:38.896393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:38.898142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:38.902182 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:29:38.902707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:38.908206 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:38.909191 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:38.909397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:38.909524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:38.915451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:38.915764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:38.925299 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:29:38.925963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:38.926187 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:38.932161 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Dec 13 01:29:38.936030 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:29:38.944801 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:29:38.955667 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:29:38.957619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:38.957807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:38.958841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:38.959269 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:38.960366 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:38.960520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:38.962489 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:29:38.962665 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:29:38.970940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:38.971053 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:38.990235 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:29:38.991135 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:29:38.995103 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:29:39.002661 augenrules[1375]: No rules
Dec 13 01:29:39.002162 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:29:39.004090 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:29:39.004825 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:39.015892 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:29:39.018963 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:29:39.033727 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:29:39.119880 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:29:39.139016 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1399)
Dec 13 01:29:39.160005 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1399)
Dec 13 01:29:39.162616 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:29:39.164436 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:29:39.190790 systemd-networkd[1387]: lo: Link UP
Dec 13 01:29:39.191139 systemd-networkd[1387]: lo: Gained carrier
Dec 13 01:29:39.193078 systemd-networkd[1387]: Enumeration completed
Dec 13 01:29:39.193224 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:29:39.203132 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:29:39.203427 systemd-resolved[1350]: Positive Trust Anchors:
Dec 13 01:29:39.203440 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:29:39.203483 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:29:39.217760 systemd-resolved[1350]: Using system hostname 'ci-4081-2-1-2-54968c70cf'.
Dec 13 01:29:39.223046 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:39.224234 systemd[1]: Reached target network.target - Network.
Dec 13 01:29:39.225061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:39.265932 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:39.267001 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1400)
Dec 13 01:29:39.267829 systemd-networkd[1387]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:39.271136 systemd-networkd[1387]: eth1: Link UP
Dec 13 01:29:39.271150 systemd-networkd[1387]: eth1: Gained carrier
Dec 13 01:29:39.271163 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:39.277028 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:39.277042 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:39.280210 systemd-networkd[1387]: eth0: Link UP
Dec 13 01:29:39.280300 systemd-networkd[1387]: eth0: Gained carrier
Dec 13 01:29:39.280375 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:39.299009 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:29:39.311040 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:29:39.323991 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:29:39.343073 systemd-networkd[1387]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:29:39.343943 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Dec 13 01:29:39.350892 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 13 01:29:39.351161 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:39.351319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:39.358462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:39.367201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:39.371122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:39.371668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:39.371700 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:29:39.371713 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:39.379262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:39.379443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:39.388521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:39.388700 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:39.389355 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:39.390076 systemd-networkd[1387]: eth0: DHCPv4 address 78.47.95.53/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:29:39.391197 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Dec 13 01:29:39.396882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:29:39.398331 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:39.399209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:39.408590 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:29:39.410592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:39.414329 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:29:39.418383 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:29:39.418570 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:29:39.428046 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:29:39.444951 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:29:39.445082 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:29:39.453269 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 13 01:29:39.453317 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 13 01:29:39.457515 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:29:39.458891 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 01:29:39.458916 kernel: [drm] features: -context_init
Dec 13 01:29:39.463407 kernel: [drm] number of scanouts: 1
Dec 13 01:29:39.463439 kernel: [drm] number of cap sets: 0
Dec 13 01:29:39.464871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:39.466020 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 13 01:29:39.486184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:39.486523 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:39.487303 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 01:29:39.487341 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 01:29:39.493243 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 01:29:39.507217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:39.508426 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:39.509313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:39.513156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:39.619501 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:29:39.620244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:39.629335 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:29:39.649576 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:29:39.692963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:29:39.695124 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:39.696342 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:29:39.696663 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:29:39.696859 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:29:39.697323 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:29:39.697613 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:29:39.697730 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:29:39.697832 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:29:39.697868 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:29:39.697949 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:29:39.701091 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:29:39.704597 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:29:39.713421 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:29:39.716634 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:29:39.719494 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:29:39.720422 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:29:39.721955 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:39.726594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:39.726632 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:39.733115 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:29:39.741696 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:29:39.751609 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:29:39.769234 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:29:39.782124 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:29:39.795324 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:29:39.799801 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:29:39.812227 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:29:39.818291 jq[1462]: false
Dec 13 01:29:39.822622 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:29:39.822942 coreos-metadata[1459]: Dec 13 01:29:39.818 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 13 01:29:39.822942 coreos-metadata[1459]: Dec 13 01:29:39.819 INFO Fetch successful
Dec 13 01:29:39.822942 coreos-metadata[1459]: Dec 13 01:29:39.820 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 13 01:29:39.822942 coreos-metadata[1459]: Dec 13 01:29:39.820 INFO Fetch successful
Dec 13 01:29:39.828428 dbus-daemon[1460]: [system] SELinux support is enabled
Dec 13 01:29:39.837162 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 01:29:39.843710 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:29:39.850116 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:29:39.854749 extend-filesystems[1464]: Found loop4
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found loop5
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found loop6
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found loop7
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda1
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda2
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda3
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found usr
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda4
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda6
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda7
Dec 13 01:29:39.859064 extend-filesystems[1464]: Found sda9
Dec 13 01:29:39.859064 extend-filesystems[1464]: Checking size of /dev/sda9
Dec 13 01:29:39.949125 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 13 01:29:39.871274 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:29:39.949367 extend-filesystems[1464]: Resized partition /dev/sda9
Dec 13 01:29:39.875419 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:29:39.954529 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:29:39.875965 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:29:39.877431 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:29:39.979355 jq[1476]: true
Dec 13 01:29:39.890655 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:29:39.908280 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:29:39.921020 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:29:39.938058 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:29:39.938337 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:29:39.938694 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:29:39.939827 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:29:39.955094 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:29:39.958585 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:29:39.999583 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:29:39.999638 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:29:40.002893 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:29:40.002921 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:29:40.008171 tar[1492]: linux-amd64/helm Dec 13 01:29:40.015613 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:29:40.028014 jq[1493]: true Dec 13 01:29:40.046807 update_engine[1474]: I20241213 01:29:40.046726 1474 main.cc:92] Flatcar Update Engine starting Dec 13 01:29:40.051434 systemd-logind[1473]: New seat seat0. 
Dec 13 01:29:40.053589 systemd-logind[1473]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:29:40.053617 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:29:40.060346 update_engine[1474]: I20241213 01:29:40.060207 1474 update_check_scheduler.cc:74] Next update check in 2m48s Dec 13 01:29:40.062006 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:29:40.066114 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:29:40.084539 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:29:40.096443 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1389) Dec 13 01:29:40.152041 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:29:40.165487 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:29:40.255558 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 01:29:40.262018 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:29:40.288754 bash[1530]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:40.290288 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:29:40.290288 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 01:29:40.290288 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 01:29:40.302429 extend-filesystems[1464]: Resized filesystem in /dev/sda9 Dec 13 01:29:40.302429 extend-filesystems[1464]: Found sr0 Dec 13 01:29:40.291035 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:29:40.291264 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 13 01:29:40.308555 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:29:40.329573 systemd[1]: Starting sshkeys.service... Dec 13 01:29:40.365261 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:29:40.378267 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:29:40.424006 coreos-metadata[1544]: Dec 13 01:29:40.423 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 01:29:40.425355 coreos-metadata[1544]: Dec 13 01:29:40.425 INFO Fetch successful Dec 13 01:29:40.428288 unknown[1544]: wrote ssh authorized keys file for user: core Dec 13 01:29:40.436001 containerd[1494]: time="2024-12-13T01:29:40.434950282Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:40.474815 update-ssh-keys[1548]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:40.476205 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:29:40.483589 systemd[1]: Finished sshkeys.service. Dec 13 01:29:40.504082 containerd[1494]: time="2024-12-13T01:29:40.504019818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507126 containerd[1494]: time="2024-12-13T01:29:40.507074198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507126 containerd[1494]: time="2024-12-13T01:29:40.507121687Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 01:29:40.507198 containerd[1494]: time="2024-12-13T01:29:40.507138558Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:40.507346 containerd[1494]: time="2024-12-13T01:29:40.507322854Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:29:40.507369 containerd[1494]: time="2024-12-13T01:29:40.507345607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507988 containerd[1494]: time="2024-12-13T01:29:40.507428382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507988 containerd[1494]: time="2024-12-13T01:29:40.507443931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507988 containerd[1494]: time="2024-12-13T01:29:40.507655538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507988 containerd[1494]: time="2024-12-13T01:29:40.507670396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507988 containerd[1494]: time="2024-12-13T01:29:40.507686576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507988 containerd[1494]: time="2024-12-13T01:29:40.507696846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:29:40.507988 containerd[1494]: time="2024-12-13T01:29:40.507786975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:40.509094 containerd[1494]: time="2024-12-13T01:29:40.509069430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:40.509243 containerd[1494]: time="2024-12-13T01:29:40.509213390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:40.509243 containerd[1494]: time="2024-12-13T01:29:40.509239058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:29:40.509353 containerd[1494]: time="2024-12-13T01:29:40.509332182Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:40.509409 containerd[1494]: time="2024-12-13T01:29:40.509389550Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:40.517170 containerd[1494]: time="2024-12-13T01:29:40.517142181Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:40.517307 containerd[1494]: time="2024-12-13T01:29:40.517183488Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:29:40.517332 containerd[1494]: time="2024-12-13T01:29:40.517307982Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:40.517332 containerd[1494]: time="2024-12-13T01:29:40.517323401Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Dec 13 01:29:40.517376 containerd[1494]: time="2024-12-13T01:29:40.517336635Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:29:40.517479 containerd[1494]: time="2024-12-13T01:29:40.517456150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517693155Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517803502Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517817889Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517830913Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517847444Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517859356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517870788Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517886928Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517900273Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517912195Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517925871Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517936100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517957070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518004 containerd[1494]: time="2024-12-13T01:29:40.517970324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518002815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518017893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518034084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518048731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518059932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518072286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518083797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518097743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518110778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518122590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518134833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518152496Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518170069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518182011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518332 containerd[1494]: time="2024-12-13T01:29:40.518192060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518223790Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518238578Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518248245Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518260729Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518269726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518282400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518296626Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:40.518591 containerd[1494]: time="2024-12-13T01:29:40.518306454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:40.518761 containerd[1494]: time="2024-12-13T01:29:40.518568577Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:40.518761 containerd[1494]: time="2024-12-13T01:29:40.518622197Z" level=info msg="Connect containerd service" Dec 13 01:29:40.518761 containerd[1494]: time="2024-12-13T01:29:40.518658605Z" level=info msg="using legacy CRI server" Dec 13 01:29:40.518761 containerd[1494]: time="2024-12-13T01:29:40.518664957Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:40.518761 containerd[1494]: time="2024-12-13T01:29:40.518745017Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527118914Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527469220Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527515557Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527560471Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527598854Z" level=info msg="Start recovering state" Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527665098Z" level=info msg="Start event monitor" Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527674736Z" level=info msg="Start snapshots syncer" Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527684074Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:40.528993 containerd[1494]: time="2024-12-13T01:29:40.527695605Z" level=info msg="Start streaming server" Dec 13 01:29:40.527852 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:40.535436 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:29:40.539148 containerd[1494]: time="2024-12-13T01:29:40.539088714Z" level=info msg="containerd successfully booted in 0.108444s" Dec 13 01:29:40.541086 systemd-networkd[1387]: eth1: Gained IPv6LL Dec 13 01:29:40.541920 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection. Dec 13 01:29:40.548035 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:29:40.553105 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:29:40.565146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:40.571061 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:29:40.575369 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:29:40.589524 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:29:40.608657 systemd[1]: issuegen.service: Deactivated successfully. 
Dec 13 01:29:40.609169 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:29:40.620475 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:29:40.629812 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:29:40.644913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:29:40.656661 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:29:40.667328 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:29:40.669807 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:29:40.858250 systemd-networkd[1387]: eth0: Gained IPv6LL Dec 13 01:29:40.859311 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection. Dec 13 01:29:40.888624 tar[1492]: linux-amd64/LICENSE Dec 13 01:29:40.888624 tar[1492]: linux-amd64/README.md Dec 13 01:29:40.902526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:29:41.635240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:41.642323 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:29:41.642716 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:41.647013 systemd[1]: Startup finished in 1.646s (kernel) + 6.658s (initrd) + 5.214s (userspace) = 13.519s. 
Dec 13 01:29:42.520525 kubelet[1590]: E1213 01:29:42.520414 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:42.529590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:42.530118 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:42.531148 systemd[1]: kubelet.service: Consumed 1.323s CPU time. Dec 13 01:29:52.782473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:52.792721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:53.026417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:53.031536 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:53.084765 kubelet[1610]: E1213 01:29:53.084509 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:53.100341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:53.101043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:03.352006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:30:03.360384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:03.604534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:30:03.609066 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:03.668266 kubelet[1627]: E1213 01:30:03.668075 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:03.671783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:03.672219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:11.117606 systemd-timesyncd[1365]: Contacted time server 148.251.5.46:123 (2.flatcar.pool.ntp.org). Dec 13 01:30:11.117736 systemd-timesyncd[1365]: Initial clock synchronization to Fri 2024-12-13 01:30:11.486337 UTC. Dec 13 01:30:13.923189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:30:13.929336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:14.135331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:30:14.154740 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:14.208678 kubelet[1643]: E1213 01:30:14.208536 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:14.213661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:14.213929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:24.399735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:30:24.409435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:24.637290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:24.637747 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:24.680848 kubelet[1659]: E1213 01:30:24.680659 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:24.688904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:24.689154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:25.173631 update_engine[1474]: I20241213 01:30:25.173431 1474 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:30:25.261078 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1677) Dec 13 01:30:25.331463 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Dec 13 01:30:34.897674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:30:34.912332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:35.137252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:35.138585 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:35.181959 kubelet[1694]: E1213 01:30:35.181764 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:35.190445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:35.190659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:45.397704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:30:45.411417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:45.630049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:30:45.634583 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:45.676594 kubelet[1711]: E1213 01:30:45.676231 1711 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:45.686051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:45.686303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:55.897523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 01:30:55.909316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:56.145368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:56.158502 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:56.217691 kubelet[1727]: E1213 01:30:56.217554 1727 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:56.223551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:56.223857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:06.397331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 01:31:06.404306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:31:06.630734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:06.640322 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:06.691651 kubelet[1743]: E1213 01:31:06.691433 1743 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:06.695247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:06.695725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:16.898330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 13 01:31:16.905282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:17.118799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:17.123566 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:17.185727 kubelet[1759]: E1213 01:31:17.185541 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:17.195494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:17.195819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:27.397268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 13 01:31:27.404160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:27.598354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:27.602519 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:27.676358 kubelet[1775]: E1213 01:31:27.676139 1775 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:27.680552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:27.681051 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:29.777404 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:31:29.783476 systemd[1]: Started sshd@0-78.47.95.53:22-147.75.109.163:51476.service - OpenSSH per-connection server daemon (147.75.109.163:51476).
Dec 13 01:31:30.807785 sshd[1784]: Accepted publickey for core from 147.75.109.163 port 51476 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:30.813635 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:30.839751 systemd-logind[1473]: New session 1 of user core.
Dec 13 01:31:30.841873 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:31:30.861907 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:31:30.887785 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:31:30.898501 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:31:30.928029 (systemd)[1788]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:31:31.116783 systemd[1788]: Queued start job for default target default.target.
Dec 13 01:31:31.123409 systemd[1788]: Created slice app.slice - User Application Slice.
Dec 13 01:31:31.123442 systemd[1788]: Reached target paths.target - Paths.
Dec 13 01:31:31.123455 systemd[1788]: Reached target timers.target - Timers.
Dec 13 01:31:31.125737 systemd[1788]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:31:31.165377 systemd[1788]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:31:31.165685 systemd[1788]: Reached target sockets.target - Sockets.
Dec 13 01:31:31.165718 systemd[1788]: Reached target basic.target - Basic System.
Dec 13 01:31:31.165799 systemd[1788]: Reached target default.target - Main User Target.
Dec 13 01:31:31.165875 systemd[1788]: Startup finished in 219ms.
Dec 13 01:31:31.167134 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:31:31.179258 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:31:31.889461 systemd[1]: Started sshd@1-78.47.95.53:22-147.75.109.163:51484.service - OpenSSH per-connection server daemon (147.75.109.163:51484).
Dec 13 01:31:32.892493 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 51484 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:32.896426 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:32.907010 systemd-logind[1473]: New session 2 of user core.
Dec 13 01:31:32.925278 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:31:33.585737 sshd[1799]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:33.593510 systemd[1]: sshd@1-78.47.95.53:22-147.75.109.163:51484.service: Deactivated successfully.
Dec 13 01:31:33.596501 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:31:33.597674 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:31:33.598892 systemd-logind[1473]: Removed session 2.
Dec 13 01:31:33.771447 systemd[1]: Started sshd@2-78.47.95.53:22-147.75.109.163:51492.service - OpenSSH per-connection server daemon (147.75.109.163:51492).
Dec 13 01:31:34.776059 sshd[1806]: Accepted publickey for core from 147.75.109.163 port 51492 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:34.779645 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:34.789752 systemd-logind[1473]: New session 3 of user core.
Dec 13 01:31:34.806342 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:31:35.456570 sshd[1806]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:35.464937 systemd[1]: sshd@2-78.47.95.53:22-147.75.109.163:51492.service: Deactivated successfully.
Dec 13 01:31:35.469885 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:31:35.471132 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:31:35.473076 systemd-logind[1473]: Removed session 3.
Dec 13 01:31:35.634443 systemd[1]: Started sshd@3-78.47.95.53:22-147.75.109.163:51500.service - OpenSSH per-connection server daemon (147.75.109.163:51500).
Dec 13 01:31:36.642741 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 51500 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:36.646138 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:36.655391 systemd-logind[1473]: New session 4 of user core.
Dec 13 01:31:36.673267 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:31:37.329967 sshd[1813]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:37.338189 systemd[1]: sshd@3-78.47.95.53:22-147.75.109.163:51500.service: Deactivated successfully.
Dec 13 01:31:37.343465 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:31:37.347397 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:31:37.349176 systemd-logind[1473]: Removed session 4.
Dec 13 01:31:37.508591 systemd[1]: Started sshd@4-78.47.95.53:22-147.75.109.163:48710.service - OpenSSH per-connection server daemon (147.75.109.163:48710).
Dec 13 01:31:37.897239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 13 01:31:37.904364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:38.158545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:38.168433 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:38.223894 kubelet[1830]: E1213 01:31:38.223807 1830 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:38.232373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:38.232599 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:38.510244 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 48710 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:38.513716 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:38.522063 systemd-logind[1473]: New session 5 of user core.
Dec 13 01:31:38.533364 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:31:39.048594 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:31:39.049077 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:39.072272 sudo[1840]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:39.234059 sshd[1820]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:39.241767 systemd[1]: sshd@4-78.47.95.53:22-147.75.109.163:48710.service: Deactivated successfully.
Dec 13 01:31:39.246789 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:31:39.248854 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:31:39.251015 systemd-logind[1473]: Removed session 5.
Dec 13 01:31:39.414451 systemd[1]: Started sshd@5-78.47.95.53:22-147.75.109.163:48712.service - OpenSSH per-connection server daemon (147.75.109.163:48712).
Dec 13 01:31:40.431878 sshd[1845]: Accepted publickey for core from 147.75.109.163 port 48712 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:40.435512 sshd[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:40.446016 systemd-logind[1473]: New session 6 of user core.
Dec 13 01:31:40.462285 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:31:40.968493 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:31:40.969283 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:40.976878 sudo[1849]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:40.990356 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:31:40.991142 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:41.014590 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:31:41.029239 auditctl[1852]: No rules
Dec 13 01:31:41.030248 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:31:41.030713 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:31:41.040665 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:31:41.107834 augenrules[1870]: No rules
Dec 13 01:31:41.109739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:31:41.113747 sudo[1848]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:41.276333 sshd[1845]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:41.282961 systemd[1]: sshd@5-78.47.95.53:22-147.75.109.163:48712.service: Deactivated successfully.
Dec 13 01:31:41.287065 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:31:41.291514 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:31:41.293664 systemd-logind[1473]: Removed session 6.
Dec 13 01:31:41.463721 systemd[1]: Started sshd@6-78.47.95.53:22-147.75.109.163:48728.service - OpenSSH per-connection server daemon (147.75.109.163:48728).
Dec 13 01:31:42.440285 sshd[1878]: Accepted publickey for core from 147.75.109.163 port 48728 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:31:42.442878 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:42.450063 systemd-logind[1473]: New session 7 of user core.
Dec 13 01:31:42.460272 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:31:42.971246 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:31:42.972184 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:43.577913 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:31:43.598753 (dockerd)[1896]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:31:44.261788 dockerd[1896]: time="2024-12-13T01:31:44.261693845Z" level=info msg="Starting up"
Dec 13 01:31:44.442144 dockerd[1896]: time="2024-12-13T01:31:44.442029820Z" level=info msg="Loading containers: start."
Dec 13 01:31:44.610027 kernel: Initializing XFRM netlink socket
Dec 13 01:31:44.708670 systemd-networkd[1387]: docker0: Link UP
Dec 13 01:31:44.724470 dockerd[1896]: time="2024-12-13T01:31:44.724406251Z" level=info msg="Loading containers: done."
Dec 13 01:31:44.749582 dockerd[1896]: time="2024-12-13T01:31:44.749515355Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:31:44.749771 dockerd[1896]: time="2024-12-13T01:31:44.749668102Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:31:44.749872 dockerd[1896]: time="2024-12-13T01:31:44.749841028Z" level=info msg="Daemon has completed initialization"
Dec 13 01:31:44.798107 dockerd[1896]: time="2024-12-13T01:31:44.797566758Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:31:44.797690 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:31:46.206870 containerd[1494]: time="2024-12-13T01:31:46.206728765Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 01:31:46.934586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469374610.mount: Deactivated successfully.
Dec 13 01:31:48.398866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 13 01:31:48.406795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:48.516926 containerd[1494]: time="2024-12-13T01:31:48.515730154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:48.518575 containerd[1494]: time="2024-12-13T01:31:48.518543746Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675734"
Dec 13 01:31:48.522992 containerd[1494]: time="2024-12-13T01:31:48.521934866Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:48.526606 containerd[1494]: time="2024-12-13T01:31:48.526585660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:48.528168 containerd[1494]: time="2024-12-13T01:31:48.528146260Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.321330743s"
Dec 13 01:31:48.528592 containerd[1494]: time="2024-12-13T01:31:48.528575975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 01:31:48.566607 containerd[1494]: time="2024-12-13T01:31:48.566557213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 01:31:48.617180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:48.620963 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:48.683255 kubelet[2111]: E1213 01:31:48.682810 2111 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:48.694191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:48.694566 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:50.203915 containerd[1494]: time="2024-12-13T01:31:50.203782120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:50.205996 containerd[1494]: time="2024-12-13T01:31:50.205912656Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606429"
Dec 13 01:31:50.207826 containerd[1494]: time="2024-12-13T01:31:50.207774368Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:50.214394 containerd[1494]: time="2024-12-13T01:31:50.213693841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:50.215421 containerd[1494]: time="2024-12-13T01:31:50.215384569Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.648514872s"
Dec 13 01:31:50.215482 containerd[1494]: time="2024-12-13T01:31:50.215423476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 01:31:50.244313 containerd[1494]: time="2024-12-13T01:31:50.244239627Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 01:31:51.484595 containerd[1494]: time="2024-12-13T01:31:51.484533071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:51.485787 containerd[1494]: time="2024-12-13T01:31:51.485749769Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783055"
Dec 13 01:31:51.487057 containerd[1494]: time="2024-12-13T01:31:51.487018056Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:51.489800 containerd[1494]: time="2024-12-13T01:31:51.489765425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:51.492001 containerd[1494]: time="2024-12-13T01:31:51.490656841Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.246158878s"
Dec 13 01:31:51.492001 containerd[1494]: time="2024-12-13T01:31:51.490683477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 01:31:51.512350 containerd[1494]: time="2024-12-13T01:31:51.512313895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 01:31:52.589877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716803052.mount: Deactivated successfully.
Dec 13 01:31:52.986102 containerd[1494]: time="2024-12-13T01:31:52.986008128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:52.987480 containerd[1494]: time="2024-12-13T01:31:52.987428619Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057496"
Dec 13 01:31:52.988610 containerd[1494]: time="2024-12-13T01:31:52.988567210Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:52.990856 containerd[1494]: time="2024-12-13T01:31:52.990835065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:52.991537 containerd[1494]: time="2024-12-13T01:31:52.991362714Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.478847989s"
Dec 13 01:31:52.991537 containerd[1494]: time="2024-12-13T01:31:52.991411037Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 01:31:53.013355 containerd[1494]: time="2024-12-13T01:31:53.013291796Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:31:53.628446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637786531.mount: Deactivated successfully.
Dec 13 01:31:54.542774 containerd[1494]: time="2024-12-13T01:31:54.542697883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:54.544477 containerd[1494]: time="2024-12-13T01:31:54.544384689Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Dec 13 01:31:54.545409 containerd[1494]: time="2024-12-13T01:31:54.545387488Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:54.548575 containerd[1494]: time="2024-12-13T01:31:54.548527371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:54.549816 containerd[1494]: time="2024-12-13T01:31:54.549671749Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.536336576s"
Dec 13 01:31:54.549816 containerd[1494]: time="2024-12-13T01:31:54.549704887Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:31:54.574896 containerd[1494]: time="2024-12-13T01:31:54.574841351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:31:55.136222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217114070.mount: Deactivated successfully.
Dec 13 01:31:55.145841 containerd[1494]: time="2024-12-13T01:31:55.145689766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:55.147399 containerd[1494]: time="2024-12-13T01:31:55.147327287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Dec 13 01:31:55.148616 containerd[1494]: time="2024-12-13T01:31:55.148515588Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:55.159155 containerd[1494]: time="2024-12-13T01:31:55.158692393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:55.161827 containerd[1494]: time="2024-12-13T01:31:55.159968218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 585.084752ms"
Dec 13 01:31:55.161827 containerd[1494]: time="2024-12-13T01:31:55.160081217Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:31:55.208055 containerd[1494]: time="2024-12-13T01:31:55.207971766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 01:31:55.841374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971752496.mount: Deactivated successfully.
Dec 13 01:31:58.897240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Dec 13 01:31:58.905604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:59.098088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:59.103318 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:59.190983 kubelet[2258]: E1213 01:31:59.190650 2258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:59.197158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:59.197391 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:59.611655 containerd[1494]: time="2024-12-13T01:31:59.611256485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:59.612918 containerd[1494]: time="2024-12-13T01:31:59.612877103Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651"
Dec 13 01:31:59.613595 containerd[1494]: time="2024-12-13T01:31:59.613547990Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:59.616663 containerd[1494]: time="2024-12-13T01:31:59.616624655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:31:59.617868 containerd[1494]: time="2024-12-13T01:31:59.617662716Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.409337878s"
Dec 13 01:31:59.617868 containerd[1494]: time="2024-12-13T01:31:59.617696467Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 01:32:02.437833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:02.447536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:32:02.497304 systemd[1]: Reloading requested from client PID 2336 ('systemctl') (unit session-7.scope)...
Dec 13 01:32:02.497330 systemd[1]: Reloading...
Dec 13 01:32:02.692153 zram_generator::config[2382]: No configuration found.
Dec 13 01:32:02.802007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:32:02.896864 systemd[1]: Reloading finished in 398 ms.
Dec 13 01:32:02.961562 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:32:02.961704 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:32:02.962140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:02.967634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:32:03.128410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:03.139756 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:32:03.212636 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:32:03.212636 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:32:03.212636 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:32:03.214017 kubelet[2428]: I1213 01:32:03.213985 2428 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:03.694464 kubelet[2428]: I1213 01:32:03.694359 2428 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:32:03.694464 kubelet[2428]: I1213 01:32:03.694394 2428 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:03.694839 kubelet[2428]: I1213 01:32:03.694608 2428 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:32:03.726945 kubelet[2428]: E1213 01:32:03.726217 2428 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.95.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.726945 kubelet[2428]: I1213 01:32:03.726635 2428 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:03.764895 kubelet[2428]: I1213 01:32:03.764442 2428 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:03.770032 kubelet[2428]: I1213 01:32:03.769613 2428 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:03.771014 kubelet[2428]: I1213 01:32:03.769924 2428 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-2-54968c70cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:03.771014 kubelet[2428]: I1213 01:32:03.770478 2428 topology_manager.go:138] "Creating topology manager with none policy" Dec 
13 01:32:03.771014 kubelet[2428]: I1213 01:32:03.770499 2428 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:03.771014 kubelet[2428]: I1213 01:32:03.770769 2428 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:03.772214 kubelet[2428]: I1213 01:32:03.772164 2428 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:32:03.772656 kubelet[2428]: I1213 01:32:03.772320 2428 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:03.772656 kubelet[2428]: I1213 01:32:03.772369 2428 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:03.772656 kubelet[2428]: I1213 01:32:03.772402 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:03.773166 kubelet[2428]: W1213 01:32:03.773070 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-54968c70cf&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.773299 kubelet[2428]: E1213 01:32:03.773241 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-54968c70cf&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.778395 kubelet[2428]: W1213 01:32:03.778106 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.778395 kubelet[2428]: E1213 01:32:03.778171 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.779250 kubelet[2428]: I1213 01:32:03.778887 2428 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:03.781036 kubelet[2428]: I1213 01:32:03.780906 2428 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:03.781036 kubelet[2428]: W1213 01:32:03.781005 2428 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:32:03.782516 kubelet[2428]: I1213 01:32:03.782495 2428 server.go:1264] "Started kubelet" Dec 13 01:32:03.791237 kubelet[2428]: I1213 01:32:03.791140 2428 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:03.795386 kubelet[2428]: I1213 01:32:03.794363 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:03.795386 kubelet[2428]: I1213 01:32:03.794859 2428 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:03.795386 kubelet[2428]: E1213 01:32:03.795116 2428 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.95.53:6443/api/v1/namespaces/default/events\": dial tcp 78.47.95.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-2-54968c70cf.1810987e12917a75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-2-54968c70cf,UID:ci-4081-2-1-2-54968c70cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-2-54968c70cf,},FirstTimestamp:2024-12-13 01:32:03.782466165 +0000 UTC m=+0.633749635,LastTimestamp:2024-12-13 01:32:03.782466165 +0000 UTC 
m=+0.633749635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-2-54968c70cf,}" Dec 13 01:32:03.798660 kubelet[2428]: I1213 01:32:03.798616 2428 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:32:03.799156 kubelet[2428]: I1213 01:32:03.799137 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:03.805648 kubelet[2428]: E1213 01:32:03.805605 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:03.807254 kubelet[2428]: I1213 01:32:03.807206 2428 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:03.808070 kubelet[2428]: I1213 01:32:03.807390 2428 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:32:03.808070 kubelet[2428]: I1213 01:32:03.807521 2428 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:32:03.810080 kubelet[2428]: W1213 01:32:03.809061 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.810080 kubelet[2428]: E1213 01:32:03.809316 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.812276 kubelet[2428]: E1213 01:32:03.812254 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-54968c70cf?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="200ms" Dec 13 
01:32:03.816479 kubelet[2428]: I1213 01:32:03.816444 2428 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:03.816607 kubelet[2428]: I1213 01:32:03.816576 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:03.817768 kubelet[2428]: E1213 01:32:03.817731 2428 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:03.818625 kubelet[2428]: I1213 01:32:03.818590 2428 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:03.829270 kubelet[2428]: I1213 01:32:03.829065 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:03.839865 kubelet[2428]: I1213 01:32:03.839828 2428 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:32:03.839925 kubelet[2428]: I1213 01:32:03.839875 2428 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:03.839925 kubelet[2428]: I1213 01:32:03.839897 2428 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:32:03.840017 kubelet[2428]: E1213 01:32:03.839957 2428 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:03.846290 kubelet[2428]: W1213 01:32:03.846219 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.846355 kubelet[2428]: E1213 01:32:03.846301 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 01:32:03.847864 kubelet[2428]: I1213 01:32:03.847836 2428 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:03.847864 kubelet[2428]: I1213 01:32:03.847855 2428 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:03.847937 kubelet[2428]: I1213 01:32:03.847879 2428 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:03.851254 kubelet[2428]: I1213 01:32:03.851226 2428 policy_none.go:49] "None policy: Start" Dec 13 01:32:03.852305 kubelet[2428]: I1213 01:32:03.852276 2428 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:03.852358 kubelet[2428]: I1213 01:32:03.852309 2428 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:03.860068 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 01:32:03.878481 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:32:03.884047 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:32:03.895101 kubelet[2428]: I1213 01:32:03.895034 2428 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:32:03.895258 kubelet[2428]: I1213 01:32:03.895213 2428 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:32:03.895602 kubelet[2428]: I1213 01:32:03.895331 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:32:03.897239 kubelet[2428]: E1213 01:32:03.897101 2428 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-2-54968c70cf\" not found"
Dec 13 01:32:03.914639 kubelet[2428]: I1213 01:32:03.914594 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:03.915074 kubelet[2428]: E1213 01:32:03.915041 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:03.940941 kubelet[2428]: I1213 01:32:03.940841 2428 topology_manager.go:215] "Topology Admit Handler" podUID="0c8124d3e0a67cd251f9bd0dfeb3af63" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:03.942859 kubelet[2428]: I1213 01:32:03.942820 2428 topology_manager.go:215] "Topology Admit Handler" podUID="493c7585bd5509d4bc020bd265649454" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:03.945357 kubelet[2428]: I1213 01:32:03.945263 2428 topology_manager.go:215] "Topology Admit Handler" podUID="4360acdd961b0e4bead88c342b334e09" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:03.955506 systemd[1]: Created slice kubepods-burstable-pod0c8124d3e0a67cd251f9bd0dfeb3af63.slice - libcontainer container kubepods-burstable-pod0c8124d3e0a67cd251f9bd0dfeb3af63.slice.
Dec 13 01:32:03.975659 systemd[1]: Created slice kubepods-burstable-pod493c7585bd5509d4bc020bd265649454.slice - libcontainer container kubepods-burstable-pod493c7585bd5509d4bc020bd265649454.slice.
Dec 13 01:32:03.983219 systemd[1]: Created slice kubepods-burstable-pod4360acdd961b0e4bead88c342b334e09.slice - libcontainer container kubepods-burstable-pod4360acdd961b0e4bead88c342b334e09.slice.
Dec 13 01:32:04.009130 kubelet[2428]: I1213 01:32:04.008828 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c8124d3e0a67cd251f9bd0dfeb3af63-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-2-54968c70cf\" (UID: \"0c8124d3e0a67cd251f9bd0dfeb3af63\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009130 kubelet[2428]: I1213 01:32:04.008881 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c8124d3e0a67cd251f9bd0dfeb3af63-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-2-54968c70cf\" (UID: \"0c8124d3e0a67cd251f9bd0dfeb3af63\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009130 kubelet[2428]: I1213 01:32:04.008905 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009130 kubelet[2428]: I1213 01:32:04.008927 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4360acdd961b0e4bead88c342b334e09-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-2-54968c70cf\" (UID: \"4360acdd961b0e4bead88c342b334e09\") " pod="kube-system/kube-scheduler-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009130 kubelet[2428]: I1213 01:32:04.008945 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c8124d3e0a67cd251f9bd0dfeb3af63-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-2-54968c70cf\" (UID: \"0c8124d3e0a67cd251f9bd0dfeb3af63\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009666 kubelet[2428]: I1213 01:32:04.008963 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009666 kubelet[2428]: I1213 01:32:04.009014 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009666 kubelet[2428]: I1213 01:32:04.009037 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.009666 kubelet[2428]: I1213 01:32:04.009055 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.013590 kubelet[2428]: E1213 01:32:04.013551 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-54968c70cf?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="400ms"
Dec 13 01:32:04.118348 kubelet[2428]: I1213 01:32:04.118296 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.118816 kubelet[2428]: E1213 01:32:04.118762 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.274198 containerd[1494]: time="2024-12-13T01:32:04.273966632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-2-54968c70cf,Uid:0c8124d3e0a67cd251f9bd0dfeb3af63,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:04.290919 containerd[1494]: time="2024-12-13T01:32:04.290363726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-2-54968c70cf,Uid:4360acdd961b0e4bead88c342b334e09,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:04.290919 containerd[1494]: time="2024-12-13T01:32:04.290371560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-2-54968c70cf,Uid:493c7585bd5509d4bc020bd265649454,Namespace:kube-system,Attempt:0,}"
Dec 13 01:32:04.414504 kubelet[2428]: E1213 01:32:04.414428 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-54968c70cf?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="800ms"
Dec 13 01:32:04.521632 kubelet[2428]: I1213 01:32:04.521557 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.522172 kubelet[2428]: E1213 01:32:04.522096 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-2-54968c70cf"
Dec 13 01:32:04.694704 kubelet[2428]: W1213 01:32:04.694510 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:04.694704 kubelet[2428]: E1213 01:32:04.694647 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:04.868669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125511787.mount: Deactivated successfully.
Dec 13 01:32:04.878422 containerd[1494]: time="2024-12-13T01:32:04.878343801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:32:04.880755 containerd[1494]: time="2024-12-13T01:32:04.880657095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:32:04.882080 containerd[1494]: time="2024-12-13T01:32:04.882024858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:32:04.885145 containerd[1494]: time="2024-12-13T01:32:04.885081446Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:32:04.887197 containerd[1494]: time="2024-12-13T01:32:04.887064652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:32:04.888676 containerd[1494]: time="2024-12-13T01:32:04.888506951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076"
Dec 13 01:32:04.888676 containerd[1494]: time="2024-12-13T01:32:04.888601271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:32:04.891502 containerd[1494]: time="2024-12-13T01:32:04.891439063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:32:04.894003 containerd[1494]: time="2024-12-13T01:32:04.893393789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.628858ms"
Dec 13 01:32:04.896408 containerd[1494]: time="2024-12-13T01:32:04.896374749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 622.214488ms"
Dec 13 01:32:04.902391 containerd[1494]: time="2024-12-13T01:32:04.902330881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.852329ms"
Dec 13 01:32:04.909520 kubelet[2428]: W1213 01:32:04.909453 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:04.909777 kubelet[2428]: E1213 01:32:04.909759 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:05.100371 containerd[1494]: time="2024-12-13T01:32:05.099854455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:05.100371 containerd[1494]: time="2024-12-13T01:32:05.099938347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:05.100371 containerd[1494]: time="2024-12-13T01:32:05.099954226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:05.100371 containerd[1494]: time="2024-12-13T01:32:05.100121470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:05.101820 containerd[1494]: time="2024-12-13T01:32:05.101733923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:05.102787 containerd[1494]: time="2024-12-13T01:32:05.101963850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:05.103417 containerd[1494]: time="2024-12-13T01:32:05.103153447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:32:05.103417 containerd[1494]: time="2024-12-13T01:32:05.103188010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:32:05.103417 containerd[1494]: time="2024-12-13T01:32:05.103210580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:05.103417 containerd[1494]: time="2024-12-13T01:32:05.103290315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:05.103417 containerd[1494]: time="2024-12-13T01:32:05.102747340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:05.104260 containerd[1494]: time="2024-12-13T01:32:05.103784340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:32:05.135101 systemd[1]: Started cri-containerd-2c1972a936e04223195774797c566bd8f2b16bf42f73e1b88a965e39462a2267.scope - libcontainer container 2c1972a936e04223195774797c566bd8f2b16bf42f73e1b88a965e39462a2267.
Dec 13 01:32:05.153124 systemd[1]: Started cri-containerd-4e0117636c40e79a11b1ddead901660e36304806d3b6b41ba00b55da04b2d8b6.scope - libcontainer container 4e0117636c40e79a11b1ddead901660e36304806d3b6b41ba00b55da04b2d8b6.
Dec 13 01:32:05.156494 systemd[1]: Started cri-containerd-1d2fa38872706e7e1a4fc298ddee51d531bbff2039afa70929519ed821f92d7a.scope - libcontainer container 1d2fa38872706e7e1a4fc298ddee51d531bbff2039afa70929519ed821f92d7a.
Dec 13 01:32:05.180028 kubelet[2428]: W1213 01:32:05.179685 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-54968c70cf&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:05.180028 kubelet[2428]: E1213 01:32:05.179848 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-54968c70cf&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:05.215638 kubelet[2428]: E1213 01:32:05.215263 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-54968c70cf?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="1.6s"
Dec 13 01:32:05.219435 containerd[1494]: time="2024-12-13T01:32:05.219350935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-2-54968c70cf,Uid:0c8124d3e0a67cd251f9bd0dfeb3af63,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c1972a936e04223195774797c566bd8f2b16bf42f73e1b88a965e39462a2267\""
Dec 13 01:32:05.222999 containerd[1494]: time="2024-12-13T01:32:05.222954689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-2-54968c70cf,Uid:493c7585bd5509d4bc020bd265649454,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d2fa38872706e7e1a4fc298ddee51d531bbff2039afa70929519ed821f92d7a\""
Dec 13 01:32:05.227694 kubelet[2428]: W1213 01:32:05.227635 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:05.227694 kubelet[2428]: E1213 01:32:05.227696 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused
Dec 13 01:32:05.230399 containerd[1494]: time="2024-12-13T01:32:05.230366928Z" level=info msg="CreateContainer within sandbox \"1d2fa38872706e7e1a4fc298ddee51d531bbff2039afa70929519ed821f92d7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:32:05.232541 containerd[1494]: time="2024-12-13T01:32:05.231670602Z" level=info msg="CreateContainer within sandbox \"2c1972a936e04223195774797c566bd8f2b16bf42f73e1b88a965e39462a2267\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:32:05.254018 containerd[1494]: time="2024-12-13T01:32:05.253543870Z" level=info msg="CreateContainer within sandbox \"1d2fa38872706e7e1a4fc298ddee51d531bbff2039afa70929519ed821f92d7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923\""
Dec 13 01:32:05.255141 containerd[1494]: time="2024-12-13T01:32:05.255113306Z" level=info msg="StartContainer for \"3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923\""
Dec 13 01:32:05.255571 containerd[1494]: time="2024-12-13T01:32:05.255539769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-2-54968c70cf,Uid:4360acdd961b0e4bead88c342b334e09,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e0117636c40e79a11b1ddead901660e36304806d3b6b41ba00b55da04b2d8b6\""
Dec 13 01:32:05.257981 containerd[1494]: time="2024-12-13T01:32:05.257935914Z" level=info msg="CreateContainer within sandbox \"2c1972a936e04223195774797c566bd8f2b16bf42f73e1b88a965e39462a2267\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ffcfca7bdaaa0e321cac5f9c16c1d4e217780dd2b914be499e474ef352675ef3\""
Dec 13 01:32:05.258908 containerd[1494]: time="2024-12-13T01:32:05.258595789Z" level=info msg="StartContainer for \"ffcfca7bdaaa0e321cac5f9c16c1d4e217780dd2b914be499e474ef352675ef3\""
Dec 13 01:32:05.259079 containerd[1494]: time="2024-12-13T01:32:05.259050274Z" level=info msg="CreateContainer within sandbox \"4e0117636c40e79a11b1ddead901660e36304806d3b6b41ba00b55da04b2d8b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:32:05.286691 containerd[1494]: time="2024-12-13T01:32:05.286640317Z" level=info msg="CreateContainer within sandbox \"4e0117636c40e79a11b1ddead901660e36304806d3b6b41ba00b55da04b2d8b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261\""
Dec 13 01:32:05.287300 containerd[1494]: time="2024-12-13T01:32:05.287141646Z" level=info msg="StartContainer for \"e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261\""
Dec 13 01:32:05.292272 systemd[1]: Started cri-containerd-3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923.scope - libcontainer container 3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923.
Dec 13 01:32:05.317242 systemd[1]: Started cri-containerd-ffcfca7bdaaa0e321cac5f9c16c1d4e217780dd2b914be499e474ef352675ef3.scope - libcontainer container ffcfca7bdaaa0e321cac5f9c16c1d4e217780dd2b914be499e474ef352675ef3.
Dec 13 01:32:05.326112 kubelet[2428]: I1213 01:32:05.325896 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:05.326623 kubelet[2428]: E1213 01:32:05.326600 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:05.342154 systemd[1]: Started cri-containerd-e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261.scope - libcontainer container e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261. Dec 13 01:32:05.384024 containerd[1494]: time="2024-12-13T01:32:05.383661928Z" level=info msg="StartContainer for \"3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923\" returns successfully" Dec 13 01:32:05.413871 containerd[1494]: time="2024-12-13T01:32:05.413737018Z" level=info msg="StartContainer for \"ffcfca7bdaaa0e321cac5f9c16c1d4e217780dd2b914be499e474ef352675ef3\" returns successfully" Dec 13 01:32:05.420938 containerd[1494]: time="2024-12-13T01:32:05.420409896Z" level=info msg="StartContainer for \"e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261\" returns successfully" Dec 13 01:32:05.651608 kubelet[2428]: E1213 01:32:05.651505 2428 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.95.53:6443/api/v1/namespaces/default/events\": dial tcp 78.47.95.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-2-54968c70cf.1810987e12917a75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-2-54968c70cf,UID:ci-4081-2-1-2-54968c70cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-2-54968c70cf,},FirstTimestamp:2024-12-13 01:32:03.782466165 +0000 UTC 
m=+0.633749635,LastTimestamp:2024-12-13 01:32:03.782466165 +0000 UTC m=+0.633749635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-2-54968c70cf,}" Dec 13 01:32:06.931027 kubelet[2428]: I1213 01:32:06.930589 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:07.140891 kubelet[2428]: E1213 01:32:07.139845 2428 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-2-54968c70cf\" not found" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:07.230832 kubelet[2428]: I1213 01:32:07.230689 2428 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:07.241272 kubelet[2428]: E1213 01:32:07.241200 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:07.341664 kubelet[2428]: E1213 01:32:07.341600 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:07.442496 kubelet[2428]: E1213 01:32:07.442417 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:07.543443 kubelet[2428]: E1213 01:32:07.543184 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:07.644195 kubelet[2428]: E1213 01:32:07.644092 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:07.745064 kubelet[2428]: E1213 01:32:07.744994 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:07.846005 kubelet[2428]: E1213 01:32:07.845750 2428 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:07.946008 kubelet[2428]: E1213 01:32:07.945914 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:08.047321 kubelet[2428]: E1213 01:32:08.047270 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:08.147901 kubelet[2428]: E1213 01:32:08.147836 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:08.248212 kubelet[2428]: E1213 01:32:08.248114 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-54968c70cf\" not found" Dec 13 01:32:08.776807 kubelet[2428]: I1213 01:32:08.776769 2428 apiserver.go:52] "Watching apiserver" Dec 13 01:32:08.808322 kubelet[2428]: I1213 01:32:08.808224 2428 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:32:09.606171 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-7.scope)... Dec 13 01:32:09.606204 systemd[1]: Reloading... Dec 13 01:32:09.785046 zram_generator::config[2746]: No configuration found. Dec 13 01:32:09.926045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:10.040950 systemd[1]: Reloading finished in 433 ms. 
Dec 13 01:32:10.110735 kubelet[2428]: E1213 01:32:10.110481 2428 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-2-1-2-54968c70cf.1810987e12917a75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-2-54968c70cf,UID:ci-4081-2-1-2-54968c70cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-2-54968c70cf,},FirstTimestamp:2024-12-13 01:32:03.782466165 +0000 UTC m=+0.633749635,LastTimestamp:2024-12-13 01:32:03.782466165 +0000 UTC m=+0.633749635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-2-54968c70cf,}" Dec 13 01:32:10.111054 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:10.130599 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:32:10.130923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:10.131058 systemd[1]: kubelet.service: Consumed 1.188s CPU time, 111.2M memory peak, 0B memory swap peak. Dec 13 01:32:10.139287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:10.310115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:10.321749 (kubelet)[2794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:10.404109 kubelet[2794]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:10.404109 kubelet[2794]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:10.404109 kubelet[2794]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:10.404566 kubelet[2794]: I1213 01:32:10.404190 2794 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:10.408864 kubelet[2794]: I1213 01:32:10.408834 2794 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:32:10.408864 kubelet[2794]: I1213 01:32:10.408857 2794 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:10.409088 kubelet[2794]: I1213 01:32:10.409067 2794 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:32:10.411922 kubelet[2794]: I1213 01:32:10.411895 2794 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:32:10.413170 kubelet[2794]: I1213 01:32:10.413010 2794 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:10.423030 kubelet[2794]: I1213 01:32:10.423000 2794 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:10.423240 kubelet[2794]: I1213 01:32:10.423215 2794 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:10.423507 kubelet[2794]: I1213 01:32:10.423243 2794 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-2-54968c70cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:10.423610 kubelet[2794]: I1213 01:32:10.423534 2794 topology_manager.go:138] "Creating topology manager with none policy" Dec 
13 01:32:10.423610 kubelet[2794]: I1213 01:32:10.423549 2794 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:10.423610 kubelet[2794]: I1213 01:32:10.423604 2794 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:10.423869 kubelet[2794]: I1213 01:32:10.423707 2794 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:32:10.423869 kubelet[2794]: I1213 01:32:10.423721 2794 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:10.423869 kubelet[2794]: I1213 01:32:10.423741 2794 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:10.423869 kubelet[2794]: I1213 01:32:10.423759 2794 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:10.431645 kubelet[2794]: I1213 01:32:10.430044 2794 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:10.431645 kubelet[2794]: I1213 01:32:10.430216 2794 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:10.431645 kubelet[2794]: I1213 01:32:10.430631 2794 server.go:1264] "Started kubelet" Dec 13 01:32:10.433568 kubelet[2794]: I1213 01:32:10.433530 2794 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:10.439005 kubelet[2794]: I1213 01:32:10.434411 2794 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:32:10.439005 kubelet[2794]: I1213 01:32:10.435327 2794 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:10.439005 kubelet[2794]: I1213 01:32:10.435520 2794 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:10.439005 kubelet[2794]: I1213 01:32:10.436265 2794 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:10.446908 kubelet[2794]: I1213 01:32:10.446538 2794 
volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:10.447294 kubelet[2794]: I1213 01:32:10.447273 2794 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:32:10.448372 kubelet[2794]: I1213 01:32:10.447506 2794 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:32:10.450635 kubelet[2794]: I1213 01:32:10.449711 2794 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:10.451223 kubelet[2794]: I1213 01:32:10.450915 2794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:10.452261 kubelet[2794]: I1213 01:32:10.452236 2794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:32:10.452319 kubelet[2794]: I1213 01:32:10.452284 2794 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:10.452319 kubelet[2794]: I1213 01:32:10.452299 2794 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:32:10.452383 kubelet[2794]: E1213 01:32:10.452336 2794 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:10.454467 kubelet[2794]: E1213 01:32:10.454434 2794 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:10.456999 kubelet[2794]: I1213 01:32:10.455360 2794 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:10.456999 kubelet[2794]: I1213 01:32:10.455374 2794 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:10.505459 kubelet[2794]: I1213 01:32:10.505434 2794 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:10.505668 kubelet[2794]: I1213 01:32:10.505645 2794 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:10.505732 kubelet[2794]: I1213 01:32:10.505721 2794 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:10.505936 kubelet[2794]: I1213 01:32:10.505922 2794 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:32:10.506045 kubelet[2794]: I1213 01:32:10.506024 2794 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:32:10.506093 kubelet[2794]: I1213 01:32:10.506086 2794 policy_none.go:49] "None policy: Start" Dec 13 01:32:10.508093 kubelet[2794]: I1213 01:32:10.508069 2794 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:10.508139 kubelet[2794]: I1213 01:32:10.508099 2794 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:10.508284 kubelet[2794]: I1213 01:32:10.508260 2794 state_mem.go:75] "Updated machine memory state" Dec 13 01:32:10.515095 kubelet[2794]: I1213 01:32:10.515001 2794 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:10.515248 kubelet[2794]: I1213 01:32:10.515180 2794 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:32:10.515301 kubelet[2794]: I1213 01:32:10.515280 2794 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:10.551087 kubelet[2794]: I1213 01:32:10.551060 2794 
kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.554000 kubelet[2794]: I1213 01:32:10.552614 2794 topology_manager.go:215] "Topology Admit Handler" podUID="0c8124d3e0a67cd251f9bd0dfeb3af63" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.554000 kubelet[2794]: I1213 01:32:10.552694 2794 topology_manager.go:215] "Topology Admit Handler" podUID="493c7585bd5509d4bc020bd265649454" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.554000 kubelet[2794]: I1213 01:32:10.552742 2794 topology_manager.go:215] "Topology Admit Handler" podUID="4360acdd961b0e4bead88c342b334e09" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.560003 kubelet[2794]: E1213 01:32:10.559947 2794 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-2-54968c70cf\" already exists" pod="kube-system/kube-scheduler-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.563028 kubelet[2794]: I1213 01:32:10.562644 2794 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.563028 kubelet[2794]: I1213 01:32:10.562731 2794 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.599186 sudo[2827]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:32:10.599678 sudo[2827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:32:10.649183 kubelet[2794]: I1213 01:32:10.649134 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c8124d3e0a67cd251f9bd0dfeb3af63-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-2-54968c70cf\" (UID: \"0c8124d3e0a67cd251f9bd0dfeb3af63\") " 
pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.750891 kubelet[2794]: I1213 01:32:10.750486 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c8124d3e0a67cd251f9bd0dfeb3af63-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-2-54968c70cf\" (UID: \"0c8124d3e0a67cd251f9bd0dfeb3af63\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.750891 kubelet[2794]: I1213 01:32:10.750547 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4360acdd961b0e4bead88c342b334e09-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-2-54968c70cf\" (UID: \"4360acdd961b0e4bead88c342b334e09\") " pod="kube-system/kube-scheduler-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.750891 kubelet[2794]: I1213 01:32:10.750571 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.750891 kubelet[2794]: I1213 01:32:10.750585 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.750891 kubelet[2794]: I1213 01:32:10.750599 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.751192 kubelet[2794]: I1213 01:32:10.750657 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c8124d3e0a67cd251f9bd0dfeb3af63-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-2-54968c70cf\" (UID: \"0c8124d3e0a67cd251f9bd0dfeb3af63\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.751192 kubelet[2794]: I1213 01:32:10.750675 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:10.751192 kubelet[2794]: I1213 01:32:10.750705 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/493c7585bd5509d4bc020bd265649454-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-2-54968c70cf\" (UID: \"493c7585bd5509d4bc020bd265649454\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:11.268158 sudo[2827]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:11.425801 kubelet[2794]: I1213 01:32:11.425759 2794 apiserver.go:52] "Watching apiserver" Dec 13 01:32:11.449392 kubelet[2794]: I1213 01:32:11.449287 2794 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:32:11.495009 kubelet[2794]: E1213 01:32:11.492403 2794 kubelet.go:1928] "Failed creating a mirror pod 
for" err="pods \"kube-apiserver-ci-4081-2-1-2-54968c70cf\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf" Dec 13 01:32:11.513712 kubelet[2794]: I1213 01:32:11.513470 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-2-54968c70cf" podStartSLOduration=1.513450546 podStartE2EDuration="1.513450546s" podCreationTimestamp="2024-12-13 01:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:11.512085948 +0000 UTC m=+1.168170414" watchObservedRunningTime="2024-12-13 01:32:11.513450546 +0000 UTC m=+1.169535012" Dec 13 01:32:11.547104 kubelet[2794]: I1213 01:32:11.544455 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-2-54968c70cf" podStartSLOduration=1.544437912 podStartE2EDuration="1.544437912s" podCreationTimestamp="2024-12-13 01:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:11.530770581 +0000 UTC m=+1.186855047" watchObservedRunningTime="2024-12-13 01:32:11.544437912 +0000 UTC m=+1.200522368" Dec 13 01:32:11.559011 kubelet[2794]: I1213 01:32:11.558935 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf" podStartSLOduration=1.558917345 podStartE2EDuration="1.558917345s" podCreationTimestamp="2024-12-13 01:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:11.547471327 +0000 UTC m=+1.203555784" watchObservedRunningTime="2024-12-13 01:32:11.558917345 +0000 UTC m=+1.215001771" Dec 13 01:32:13.114569 sudo[1881]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:13.275931 sshd[1878]: 
pam_unix(sshd:session): session closed for user core Dec 13 01:32:13.285343 systemd[1]: sshd@6-78.47.95.53:22-147.75.109.163:48728.service: Deactivated successfully. Dec 13 01:32:13.291084 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:32:13.291583 systemd[1]: session-7.scope: Consumed 5.809s CPU time, 189.3M memory peak, 0B memory swap peak. Dec 13 01:32:13.295038 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:32:13.298214 systemd-logind[1473]: Removed session 7. Dec 13 01:32:24.911651 kubelet[2794]: I1213 01:32:24.911543 2794 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:32:24.912349 containerd[1494]: time="2024-12-13T01:32:24.912285577Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:32:24.912922 kubelet[2794]: I1213 01:32:24.912582 2794 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:25.728502 kubelet[2794]: I1213 01:32:25.728331 2794 topology_manager.go:215] "Topology Admit Handler" podUID="d35dc2c8-eb5a-4485-8f47-619aaed984c3" podNamespace="kube-system" podName="kube-proxy-6dtrh" Dec 13 01:32:25.747116 kubelet[2794]: I1213 01:32:25.745389 2794 topology_manager.go:215] "Topology Admit Handler" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" podNamespace="kube-system" podName="cilium-q2fvh" Dec 13 01:32:25.746083 systemd[1]: Created slice kubepods-besteffort-podd35dc2c8_eb5a_4485_8f47_619aaed984c3.slice - libcontainer container kubepods-besteffort-podd35dc2c8_eb5a_4485_8f47_619aaed984c3.slice. 
Dec 13 01:32:25.754002 kubelet[2794]: I1213 01:32:25.753687 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d35dc2c8-eb5a-4485-8f47-619aaed984c3-kube-proxy\") pod \"kube-proxy-6dtrh\" (UID: \"d35dc2c8-eb5a-4485-8f47-619aaed984c3\") " pod="kube-system/kube-proxy-6dtrh" Dec 13 01:32:25.754002 kubelet[2794]: I1213 01:32:25.753731 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7zm\" (UniqueName: \"kubernetes.io/projected/d35dc2c8-eb5a-4485-8f47-619aaed984c3-kube-api-access-pt7zm\") pod \"kube-proxy-6dtrh\" (UID: \"d35dc2c8-eb5a-4485-8f47-619aaed984c3\") " pod="kube-system/kube-proxy-6dtrh" Dec 13 01:32:25.754002 kubelet[2794]: I1213 01:32:25.753765 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d35dc2c8-eb5a-4485-8f47-619aaed984c3-xtables-lock\") pod \"kube-proxy-6dtrh\" (UID: \"d35dc2c8-eb5a-4485-8f47-619aaed984c3\") " pod="kube-system/kube-proxy-6dtrh" Dec 13 01:32:25.754002 kubelet[2794]: I1213 01:32:25.753785 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d35dc2c8-eb5a-4485-8f47-619aaed984c3-lib-modules\") pod \"kube-proxy-6dtrh\" (UID: \"d35dc2c8-eb5a-4485-8f47-619aaed984c3\") " pod="kube-system/kube-proxy-6dtrh" Dec 13 01:32:25.768144 kubelet[2794]: W1213 01:32:25.766853 2794 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-2-1-2-54968c70cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-2-54968c70cf' and this object Dec 13 01:32:25.768144 kubelet[2794]: E1213 01:32:25.766911 2794 
reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-2-1-2-54968c70cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-2-54968c70cf' and this object Dec 13 01:32:25.768144 kubelet[2794]: W1213 01:32:25.766957 2794 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-2-1-2-54968c70cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-2-54968c70cf' and this object Dec 13 01:32:25.768144 kubelet[2794]: E1213 01:32:25.767003 2794 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-2-1-2-54968c70cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-2-54968c70cf' and this object Dec 13 01:32:25.768144 kubelet[2794]: W1213 01:32:25.767150 2794 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-2-1-2-54968c70cf" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-2-54968c70cf' and this object Dec 13 01:32:25.768447 kubelet[2794]: E1213 01:32:25.767168 2794 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-2-1-2-54968c70cf" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-2-54968c70cf' and 
this object Dec 13 01:32:25.768579 systemd[1]: Created slice kubepods-burstable-pod3e29260e_d0a9_434d_8eb2_02da89d8fc6e.slice - libcontainer container kubepods-burstable-pod3e29260e_d0a9_434d_8eb2_02da89d8fc6e.slice. Dec 13 01:32:25.854551 kubelet[2794]: I1213 01:32:25.854500 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-run\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.854756 kubelet[2794]: I1213 01:32:25.854736 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-xtables-lock\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.854868 kubelet[2794]: I1213 01:32:25.854819 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hubble-tls\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.854948 kubelet[2794]: I1213 01:32:25.854871 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-bpf-maps\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.854948 kubelet[2794]: I1213 01:32:25.854888 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hostproc\") pod \"cilium-q2fvh\" (UID: 
\"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.854948 kubelet[2794]: I1213 01:32:25.854904 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-cgroup\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.854948 kubelet[2794]: I1213 01:32:25.854926 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2mqj\" (UniqueName: \"kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-kube-api-access-z2mqj\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.855136 kubelet[2794]: I1213 01:32:25.854957 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-clustermesh-secrets\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.855136 kubelet[2794]: I1213 01:32:25.855004 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-config-path\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.855136 kubelet[2794]: I1213 01:32:25.855021 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-kernel\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 
01:32:25.855136 kubelet[2794]: I1213 01:32:25.855037 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-net\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.855136 kubelet[2794]: I1213 01:32:25.855076 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-etc-cni-netd\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.855261 kubelet[2794]: I1213 01:32:25.855104 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cni-path\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.855261 kubelet[2794]: I1213 01:32:25.855120 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-lib-modules\") pod \"cilium-q2fvh\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") " pod="kube-system/cilium-q2fvh" Dec 13 01:32:25.960060 kubelet[2794]: I1213 01:32:25.958380 2794 topology_manager.go:215] "Topology Admit Handler" podUID="f789b555-b05f-4899-811a-0ec6fe0479b8" podNamespace="kube-system" podName="cilium-operator-599987898-7fhqk" Dec 13 01:32:25.973186 systemd[1]: Created slice kubepods-besteffort-podf789b555_b05f_4899_811a_0ec6fe0479b8.slice - libcontainer container kubepods-besteffort-podf789b555_b05f_4899_811a_0ec6fe0479b8.slice. 
Dec 13 01:32:26.057444 kubelet[2794]: I1213 01:32:26.057278 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f789b555-b05f-4899-811a-0ec6fe0479b8-cilium-config-path\") pod \"cilium-operator-599987898-7fhqk\" (UID: \"f789b555-b05f-4899-811a-0ec6fe0479b8\") " pod="kube-system/cilium-operator-599987898-7fhqk" Dec 13 01:32:26.057444 kubelet[2794]: I1213 01:32:26.057353 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txljb\" (UniqueName: \"kubernetes.io/projected/f789b555-b05f-4899-811a-0ec6fe0479b8-kube-api-access-txljb\") pod \"cilium-operator-599987898-7fhqk\" (UID: \"f789b555-b05f-4899-811a-0ec6fe0479b8\") " pod="kube-system/cilium-operator-599987898-7fhqk" Dec 13 01:32:26.064564 containerd[1494]: time="2024-12-13T01:32:26.064507088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dtrh,Uid:d35dc2c8-eb5a-4485-8f47-619aaed984c3,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:26.104350 containerd[1494]: time="2024-12-13T01:32:26.103849131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:26.104350 containerd[1494]: time="2024-12-13T01:32:26.104021074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:26.104350 containerd[1494]: time="2024-12-13T01:32:26.104055870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:26.104726 containerd[1494]: time="2024-12-13T01:32:26.104534731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:26.133341 systemd[1]: Started cri-containerd-5b505f047fd62fe863322bcabf9f32a6e3b905c6d7a1a6863788cf29f706bf35.scope - libcontainer container 5b505f047fd62fe863322bcabf9f32a6e3b905c6d7a1a6863788cf29f706bf35. Dec 13 01:32:26.181494 containerd[1494]: time="2024-12-13T01:32:26.181343745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dtrh,Uid:d35dc2c8-eb5a-4485-8f47-619aaed984c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b505f047fd62fe863322bcabf9f32a6e3b905c6d7a1a6863788cf29f706bf35\"" Dec 13 01:32:26.185906 containerd[1494]: time="2024-12-13T01:32:26.185651186Z" level=info msg="CreateContainer within sandbox \"5b505f047fd62fe863322bcabf9f32a6e3b905c6d7a1a6863788cf29f706bf35\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:32:26.207058 containerd[1494]: time="2024-12-13T01:32:26.206958357Z" level=info msg="CreateContainer within sandbox \"5b505f047fd62fe863322bcabf9f32a6e3b905c6d7a1a6863788cf29f706bf35\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b3b5dc7855e4efff453b9f806e8dff2ef23ef95ff749ba3b359f78b0202264a\"" Dec 13 01:32:26.208606 containerd[1494]: time="2024-12-13T01:32:26.207721786Z" level=info msg="StartContainer for \"5b3b5dc7855e4efff453b9f806e8dff2ef23ef95ff749ba3b359f78b0202264a\"" Dec 13 01:32:26.247285 systemd[1]: Started cri-containerd-5b3b5dc7855e4efff453b9f806e8dff2ef23ef95ff749ba3b359f78b0202264a.scope - libcontainer container 5b3b5dc7855e4efff453b9f806e8dff2ef23ef95ff749ba3b359f78b0202264a. 
Dec 13 01:32:26.291836 containerd[1494]: time="2024-12-13T01:32:26.291778195Z" level=info msg="StartContainer for \"5b3b5dc7855e4efff453b9f806e8dff2ef23ef95ff749ba3b359f78b0202264a\" returns successfully" Dec 13 01:32:26.526724 kubelet[2794]: I1213 01:32:26.526405 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6dtrh" podStartSLOduration=1.5263874020000001 podStartE2EDuration="1.526387402s" podCreationTimestamp="2024-12-13 01:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:26.525905353 +0000 UTC m=+16.181989789" watchObservedRunningTime="2024-12-13 01:32:26.526387402 +0000 UTC m=+16.182471837" Dec 13 01:32:26.962373 kubelet[2794]: E1213 01:32:26.962318 2794 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 13 01:32:26.963001 kubelet[2794]: E1213 01:32:26.962431 2794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-clustermesh-secrets podName:3e29260e-d0a9-434d-8eb2-02da89d8fc6e nodeName:}" failed. No retries permitted until 2024-12-13 01:32:27.462394944 +0000 UTC m=+17.118479380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-clustermesh-secrets") pod "cilium-q2fvh" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e") : failed to sync secret cache: timed out waiting for the condition Dec 13 01:32:26.965600 kubelet[2794]: E1213 01:32:26.965457 2794 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:32:26.965600 kubelet[2794]: E1213 01:32:26.965522 2794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-config-path podName:3e29260e-d0a9-434d-8eb2-02da89d8fc6e nodeName:}" failed. No retries permitted until 2024-12-13 01:32:27.465508618 +0000 UTC m=+17.121593054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-config-path") pod "cilium-q2fvh" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:32:27.178273 containerd[1494]: time="2024-12-13T01:32:27.178156655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7fhqk,Uid:f789b555-b05f-4899-811a-0ec6fe0479b8,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:27.230732 containerd[1494]: time="2024-12-13T01:32:27.228290774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:27.230732 containerd[1494]: time="2024-12-13T01:32:27.229419933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:27.230732 containerd[1494]: time="2024-12-13T01:32:27.229465659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.232199 containerd[1494]: time="2024-12-13T01:32:27.230465234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.276199 systemd[1]: Started cri-containerd-69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704.scope - libcontainer container 69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704. Dec 13 01:32:27.332625 containerd[1494]: time="2024-12-13T01:32:27.332567942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7fhqk,Uid:f789b555-b05f-4899-811a-0ec6fe0479b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\"" Dec 13 01:32:27.335500 containerd[1494]: time="2024-12-13T01:32:27.335471305Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:32:27.574600 containerd[1494]: time="2024-12-13T01:32:27.574053041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2fvh,Uid:3e29260e-d0a9-434d-8eb2-02da89d8fc6e,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:27.604382 containerd[1494]: time="2024-12-13T01:32:27.604165402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:27.606358 containerd[1494]: time="2024-12-13T01:32:27.605928326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:27.606358 containerd[1494]: time="2024-12-13T01:32:27.606019036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.606358 containerd[1494]: time="2024-12-13T01:32:27.606186012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.640288 systemd[1]: Started cri-containerd-f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f.scope - libcontainer container f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f. Dec 13 01:32:27.685461 containerd[1494]: time="2024-12-13T01:32:27.685356384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2fvh,Uid:3e29260e-d0a9-434d-8eb2-02da89d8fc6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\"" Dec 13 01:32:28.128905 update_engine[1474]: I20241213 01:32:28.128787 1474 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 01:32:28.128905 update_engine[1474]: I20241213 01:32:28.128851 1474 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 01:32:28.129913 update_engine[1474]: I20241213 01:32:28.129218 1474 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 01:32:28.129913 update_engine[1474]: I20241213 01:32:28.129721 1474 omaha_request_params.cc:62] Current group set to stable Dec 13 01:32:28.131554 update_engine[1474]: I20241213 01:32:28.131421 1474 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 01:32:28.131554 update_engine[1474]: I20241213 01:32:28.131464 1474 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 01:32:28.131554 update_engine[1474]: I20241213 01:32:28.131493 1474 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:32:28.131746 update_engine[1474]: I20241213 01:32:28.131631 1474 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 01:32:28.132040 update_engine[1474]: I20241213 01:32:28.131783 1474 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:32:28.132040 update_engine[1474]: I20241213 01:32:28.131812 1474 omaha_request_action.cc:272] Request: Dec 13 01:32:28.132040 update_engine[1474]: I20241213 01:32:28.131827 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:28.132481 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 01:32:28.137945 update_engine[1474]: I20241213 01:32:28.137874 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:28.138215 update_engine[1474]: I20241213 01:32:28.138185 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:32:28.138965 update_engine[1474]: E20241213 01:32:28.138917 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:28.139047 update_engine[1474]: I20241213 01:32:28.138991 1474 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 01:32:28.983487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448517707.mount: Deactivated successfully. 
Dec 13 01:32:29.660319 containerd[1494]: time="2024-12-13T01:32:29.660245804Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:29.662084 containerd[1494]: time="2024-12-13T01:32:29.662032230Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193" Dec 13 01:32:29.663275 containerd[1494]: time="2024-12-13T01:32:29.663229172Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:29.664409 containerd[1494]: time="2024-12-13T01:32:29.664378373Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.328877743s" Dec 13 01:32:29.664464 containerd[1494]: time="2024-12-13T01:32:29.664411516Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:32:29.665871 containerd[1494]: time="2024-12-13T01:32:29.665836028Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:32:29.666884 containerd[1494]: time="2024-12-13T01:32:29.666776264Z" level=info msg="CreateContainer within sandbox 
\"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:32:29.696292 containerd[1494]: time="2024-12-13T01:32:29.696237638Z" level=info msg="CreateContainer within sandbox \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\"" Dec 13 01:32:29.696994 containerd[1494]: time="2024-12-13T01:32:29.696926640Z" level=info msg="StartContainer for \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\"" Dec 13 01:32:29.724143 systemd[1]: Started cri-containerd-d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648.scope - libcontainer container d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648. Dec 13 01:32:29.755515 containerd[1494]: time="2024-12-13T01:32:29.755462848Z" level=info msg="StartContainer for \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\" returns successfully" Dec 13 01:32:30.557008 kubelet[2794]: I1213 01:32:30.553968 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7fhqk" podStartSLOduration=3.223575992 podStartE2EDuration="5.553949989s" podCreationTimestamp="2024-12-13 01:32:25 +0000 UTC" firstStartedPulling="2024-12-13 01:32:27.334936868 +0000 UTC m=+16.991021334" lastFinishedPulling="2024-12-13 01:32:29.665310895 +0000 UTC m=+19.321395331" observedRunningTime="2024-12-13 01:32:30.553397685 +0000 UTC m=+20.209482120" watchObservedRunningTime="2024-12-13 01:32:30.553949989 +0000 UTC m=+20.210034424" Dec 13 01:32:36.019385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3951223973.mount: Deactivated successfully. 
Dec 13 01:32:37.996705 containerd[1494]: time="2024-12-13T01:32:37.996539031Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:37.998450 containerd[1494]: time="2024-12-13T01:32:37.998386039Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735251" Dec 13 01:32:37.999140 containerd[1494]: time="2024-12-13T01:32:37.999095860Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:38.000822 containerd[1494]: time="2024-12-13T01:32:38.000498171Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.334633991s" Dec 13 01:32:38.000822 containerd[1494]: time="2024-12-13T01:32:38.000530823Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:32:38.027970 containerd[1494]: time="2024-12-13T01:32:38.027917643Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:32:38.136568 update_engine[1474]: I20241213 01:32:38.136123 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:38.138140 update_engine[1474]: 
I20241213 01:32:38.136731 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:38.138140 update_engine[1474]: I20241213 01:32:38.137130 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:32:38.138450 update_engine[1474]: E20241213 01:32:38.138320 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:38.138450 update_engine[1474]: I20241213 01:32:38.138417 1474 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 01:32:38.168749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143315775.mount: Deactivated successfully. Dec 13 01:32:38.171237 containerd[1494]: time="2024-12-13T01:32:38.171149970Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\"" Dec 13 01:32:38.172568 containerd[1494]: time="2024-12-13T01:32:38.172427596Z" level=info msg="StartContainer for \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\"" Dec 13 01:32:38.440284 systemd[1]: Started cri-containerd-8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5.scope - libcontainer container 8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5. Dec 13 01:32:38.488450 containerd[1494]: time="2024-12-13T01:32:38.488318377Z" level=info msg="StartContainer for \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\" returns successfully" Dec 13 01:32:38.511442 systemd[1]: cri-containerd-8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5.scope: Deactivated successfully. 
Dec 13 01:32:38.627961 containerd[1494]: time="2024-12-13T01:32:38.600258981Z" level=info msg="shim disconnected" id=8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5 namespace=k8s.io Dec 13 01:32:38.627961 containerd[1494]: time="2024-12-13T01:32:38.627635701Z" level=warning msg="cleaning up after shim disconnected" id=8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5 namespace=k8s.io Dec 13 01:32:38.627961 containerd[1494]: time="2024-12-13T01:32:38.627656381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:39.160327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5-rootfs.mount: Deactivated successfully. Dec 13 01:32:39.561199 containerd[1494]: time="2024-12-13T01:32:39.560645556Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:32:39.616923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629247080.mount: Deactivated successfully. Dec 13 01:32:39.626757 containerd[1494]: time="2024-12-13T01:32:39.626614743Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\"" Dec 13 01:32:39.627749 containerd[1494]: time="2024-12-13T01:32:39.627699743Z" level=info msg="StartContainer for \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\"" Dec 13 01:32:39.680310 systemd[1]: Started cri-containerd-36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0.scope - libcontainer container 36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0. 
Dec 13 01:32:39.717617 containerd[1494]: time="2024-12-13T01:32:39.717378779Z" level=info msg="StartContainer for \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\" returns successfully" Dec 13 01:32:39.737961 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:32:39.738399 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:32:39.739081 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:32:39.745504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:32:39.745838 systemd[1]: cri-containerd-36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0.scope: Deactivated successfully. Dec 13 01:32:39.795432 containerd[1494]: time="2024-12-13T01:32:39.795375985Z" level=info msg="shim disconnected" id=36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0 namespace=k8s.io Dec 13 01:32:39.796992 containerd[1494]: time="2024-12-13T01:32:39.795658333Z" level=warning msg="cleaning up after shim disconnected" id=36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0 namespace=k8s.io Dec 13 01:32:39.796992 containerd[1494]: time="2024-12-13T01:32:39.795672080Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:39.808816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:32:39.812800 containerd[1494]: time="2024-12-13T01:32:39.812327958Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:32:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:32:40.162069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0-rootfs.mount: Deactivated successfully. 
Dec 13 01:32:40.564808 containerd[1494]: time="2024-12-13T01:32:40.564614486Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:32:40.594198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422565724.mount: Deactivated successfully. Dec 13 01:32:40.598679 containerd[1494]: time="2024-12-13T01:32:40.598194026Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\"" Dec 13 01:32:40.598871 containerd[1494]: time="2024-12-13T01:32:40.598846621Z" level=info msg="StartContainer for \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\"" Dec 13 01:32:40.640234 systemd[1]: Started cri-containerd-8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456.scope - libcontainer container 8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456. Dec 13 01:32:40.683240 containerd[1494]: time="2024-12-13T01:32:40.683192329Z" level=info msg="StartContainer for \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\" returns successfully" Dec 13 01:32:40.691492 systemd[1]: cri-containerd-8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456.scope: Deactivated successfully. 
Dec 13 01:32:40.720309 containerd[1494]: time="2024-12-13T01:32:40.720239099Z" level=info msg="shim disconnected" id=8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456 namespace=k8s.io Dec 13 01:32:40.720309 containerd[1494]: time="2024-12-13T01:32:40.720292452Z" level=warning msg="cleaning up after shim disconnected" id=8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456 namespace=k8s.io Dec 13 01:32:40.720309 containerd[1494]: time="2024-12-13T01:32:40.720300837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:41.160699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456-rootfs.mount: Deactivated successfully. Dec 13 01:32:41.575117 containerd[1494]: time="2024-12-13T01:32:41.574486887Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:32:41.608092 containerd[1494]: time="2024-12-13T01:32:41.605103074Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\"" Dec 13 01:32:41.605783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364153271.mount: Deactivated successfully. Dec 13 01:32:41.609489 containerd[1494]: time="2024-12-13T01:32:41.609411959Z" level=info msg="StartContainer for \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\"" Dec 13 01:32:41.673301 systemd[1]: Started cri-containerd-7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966.scope - libcontainer container 7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966. 
Dec 13 01:32:41.718642 systemd[1]: cri-containerd-7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966.scope: Deactivated successfully. Dec 13 01:32:41.720478 containerd[1494]: time="2024-12-13T01:32:41.720329970Z" level=info msg="StartContainer for \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\" returns successfully" Dec 13 01:32:41.749921 containerd[1494]: time="2024-12-13T01:32:41.749789356Z" level=info msg="shim disconnected" id=7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966 namespace=k8s.io Dec 13 01:32:41.750280 containerd[1494]: time="2024-12-13T01:32:41.750249365Z" level=warning msg="cleaning up after shim disconnected" id=7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966 namespace=k8s.io Dec 13 01:32:41.750280 containerd[1494]: time="2024-12-13T01:32:41.750273130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:42.161818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966-rootfs.mount: Deactivated successfully. Dec 13 01:32:42.580214 containerd[1494]: time="2024-12-13T01:32:42.579823015Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:32:42.653406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655649771.mount: Deactivated successfully. 
Dec 13 01:32:42.657039 containerd[1494]: time="2024-12-13T01:32:42.656938130Z" level=info msg="CreateContainer within sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\"" Dec 13 01:32:42.661866 containerd[1494]: time="2024-12-13T01:32:42.659914782Z" level=info msg="StartContainer for \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\"" Dec 13 01:32:42.718689 systemd[1]: Started cri-containerd-1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550.scope - libcontainer container 1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550. Dec 13 01:32:42.767726 containerd[1494]: time="2024-12-13T01:32:42.767685263Z" level=info msg="StartContainer for \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\" returns successfully" Dec 13 01:32:42.923194 kubelet[2794]: I1213 01:32:42.923149 2794 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:32:42.957998 kubelet[2794]: I1213 01:32:42.957938 2794 topology_manager.go:215] "Topology Admit Handler" podUID="22895f47-88d9-4179-ad38-350c424b08f5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tldkx" Dec 13 01:32:42.978514 kubelet[2794]: I1213 01:32:42.978478 2794 topology_manager.go:215] "Topology Admit Handler" podUID="f817291d-c2c5-4564-affc-8f2c7821747d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4rwlm" Dec 13 01:32:43.004777 systemd[1]: Created slice kubepods-burstable-podf817291d_c2c5_4564_affc_8f2c7821747d.slice - libcontainer container kubepods-burstable-podf817291d_c2c5_4564_affc_8f2c7821747d.slice. Dec 13 01:32:43.033126 systemd[1]: Created slice kubepods-burstable-pod22895f47_88d9_4179_ad38_350c424b08f5.slice - libcontainer container kubepods-burstable-pod22895f47_88d9_4179_ad38_350c424b08f5.slice. 
Dec 13 01:32:43.102668 kubelet[2794]: I1213 01:32:43.102627 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2s65\" (UniqueName: \"kubernetes.io/projected/22895f47-88d9-4179-ad38-350c424b08f5-kube-api-access-t2s65\") pod \"coredns-7db6d8ff4d-tldkx\" (UID: \"22895f47-88d9-4179-ad38-350c424b08f5\") " pod="kube-system/coredns-7db6d8ff4d-tldkx" Dec 13 01:32:43.102668 kubelet[2794]: I1213 01:32:43.102666 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f817291d-c2c5-4564-affc-8f2c7821747d-config-volume\") pod \"coredns-7db6d8ff4d-4rwlm\" (UID: \"f817291d-c2c5-4564-affc-8f2c7821747d\") " pod="kube-system/coredns-7db6d8ff4d-4rwlm" Dec 13 01:32:43.102844 kubelet[2794]: I1213 01:32:43.102686 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22895f47-88d9-4179-ad38-350c424b08f5-config-volume\") pod \"coredns-7db6d8ff4d-tldkx\" (UID: \"22895f47-88d9-4179-ad38-350c424b08f5\") " pod="kube-system/coredns-7db6d8ff4d-tldkx" Dec 13 01:32:43.102844 kubelet[2794]: I1213 01:32:43.102700 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9sr\" (UniqueName: \"kubernetes.io/projected/f817291d-c2c5-4564-affc-8f2c7821747d-kube-api-access-9z9sr\") pod \"coredns-7db6d8ff4d-4rwlm\" (UID: \"f817291d-c2c5-4564-affc-8f2c7821747d\") " pod="kube-system/coredns-7db6d8ff4d-4rwlm" Dec 13 01:32:43.333579 containerd[1494]: time="2024-12-13T01:32:43.332765945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4rwlm,Uid:f817291d-c2c5-4564-affc-8f2c7821747d,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:43.337792 containerd[1494]: time="2024-12-13T01:32:43.337758730Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-tldkx,Uid:22895f47-88d9-4179-ad38-350c424b08f5,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:43.602717 kubelet[2794]: I1213 01:32:43.601914 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q2fvh" podStartSLOduration=8.287215232 podStartE2EDuration="18.60189402s" podCreationTimestamp="2024-12-13 01:32:25 +0000 UTC" firstStartedPulling="2024-12-13 01:32:27.686932465 +0000 UTC m=+17.343016911" lastFinishedPulling="2024-12-13 01:32:38.001611263 +0000 UTC m=+27.657695699" observedRunningTime="2024-12-13 01:32:43.601221132 +0000 UTC m=+33.257305607" watchObservedRunningTime="2024-12-13 01:32:43.60189402 +0000 UTC m=+33.257978455" Dec 13 01:32:45.143034 systemd-networkd[1387]: cilium_host: Link UP Dec 13 01:32:45.145183 systemd-networkd[1387]: cilium_net: Link UP Dec 13 01:32:45.145549 systemd-networkd[1387]: cilium_net: Gained carrier Dec 13 01:32:45.147477 systemd-networkd[1387]: cilium_host: Gained carrier Dec 13 01:32:45.332620 systemd-networkd[1387]: cilium_vxlan: Link UP Dec 13 01:32:45.333794 systemd-networkd[1387]: cilium_vxlan: Gained carrier Dec 13 01:32:45.691291 systemd-networkd[1387]: cilium_net: Gained IPv6LL Dec 13 01:32:45.746315 kernel: NET: Registered PF_ALG protocol family Dec 13 01:32:45.882293 systemd-networkd[1387]: cilium_host: Gained IPv6LL Dec 13 01:32:46.523684 systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL Dec 13 01:32:46.530646 systemd-networkd[1387]: lxc_health: Link UP Dec 13 01:32:46.538438 systemd-networkd[1387]: lxc_health: Gained carrier Dec 13 01:32:46.926452 systemd-networkd[1387]: lxcd02e77aa7895: Link UP Dec 13 01:32:46.935919 systemd-networkd[1387]: lxc773df0f036c4: Link UP Dec 13 01:32:46.944119 kernel: eth0: renamed from tmp9fbaf Dec 13 01:32:46.950019 kernel: eth0: renamed from tmp154ce Dec 13 01:32:46.959486 systemd-networkd[1387]: lxcd02e77aa7895: Gained carrier Dec 13 01:32:46.961597 systemd-networkd[1387]: lxc773df0f036c4: Gained 
carrier Dec 13 01:32:47.802178 systemd-networkd[1387]: lxc_health: Gained IPv6LL Dec 13 01:32:48.143554 update_engine[1474]: I20241213 01:32:48.143015 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:48.145513 update_engine[1474]: I20241213 01:32:48.145215 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:48.145513 update_engine[1474]: I20241213 01:32:48.145460 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:32:48.146157 update_engine[1474]: E20241213 01:32:48.146093 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:48.146157 update_engine[1474]: I20241213 01:32:48.146136 1474 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 01:32:48.954191 systemd-networkd[1387]: lxc773df0f036c4: Gained IPv6LL Dec 13 01:32:49.018210 systemd-networkd[1387]: lxcd02e77aa7895: Gained IPv6LL Dec 13 01:32:50.623399 containerd[1494]: time="2024-12-13T01:32:50.623188422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:50.623399 containerd[1494]: time="2024-12-13T01:32:50.623253297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:50.623399 containerd[1494]: time="2024-12-13T01:32:50.623265360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:50.625951 containerd[1494]: time="2024-12-13T01:32:50.625767891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:50.667164 systemd[1]: Started cri-containerd-9fbaf8ed387541eb1595b6343db9defca501376ae528e78dfba0b45be555a0ba.scope - libcontainer container 9fbaf8ed387541eb1595b6343db9defca501376ae528e78dfba0b45be555a0ba. 
Dec 13 01:32:50.721934 containerd[1494]: time="2024-12-13T01:32:50.720988599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:50.721934 containerd[1494]: time="2024-12-13T01:32:50.721040459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:50.721934 containerd[1494]: time="2024-12-13T01:32:50.721053615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:50.721934 containerd[1494]: time="2024-12-13T01:32:50.721129400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:50.761546 containerd[1494]: time="2024-12-13T01:32:50.761476201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tldkx,Uid:22895f47-88d9-4179-ad38-350c424b08f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fbaf8ed387541eb1595b6343db9defca501376ae528e78dfba0b45be555a0ba\"" Dec 13 01:32:50.786186 systemd[1]: Started cri-containerd-154ce6a063ceaa872684aeceb3ef6d4a0430d90193addb447d97d2aab45f9482.scope - libcontainer container 154ce6a063ceaa872684aeceb3ef6d4a0430d90193addb447d97d2aab45f9482. 
Dec 13 01:32:50.794644 containerd[1494]: time="2024-12-13T01:32:50.794441561Z" level=info msg="CreateContainer within sandbox \"9fbaf8ed387541eb1595b6343db9defca501376ae528e78dfba0b45be555a0ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:50.852840 containerd[1494]: time="2024-12-13T01:32:50.852688072Z" level=info msg="CreateContainer within sandbox \"9fbaf8ed387541eb1595b6343db9defca501376ae528e78dfba0b45be555a0ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c005aeb4346227d64d6ed4633b1e5f25c384989e454fe2174510ece7066264d\"" Dec 13 01:32:50.856998 containerd[1494]: time="2024-12-13T01:32:50.855924553Z" level=info msg="StartContainer for \"8c005aeb4346227d64d6ed4633b1e5f25c384989e454fe2174510ece7066264d\"" Dec 13 01:32:50.881549 containerd[1494]: time="2024-12-13T01:32:50.880635588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4rwlm,Uid:f817291d-c2c5-4564-affc-8f2c7821747d,Namespace:kube-system,Attempt:0,} returns sandbox id \"154ce6a063ceaa872684aeceb3ef6d4a0430d90193addb447d97d2aab45f9482\"" Dec 13 01:32:50.884543 containerd[1494]: time="2024-12-13T01:32:50.884495157Z" level=info msg="CreateContainer within sandbox \"154ce6a063ceaa872684aeceb3ef6d4a0430d90193addb447d97d2aab45f9482\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:50.900306 systemd[1]: Started cri-containerd-8c005aeb4346227d64d6ed4633b1e5f25c384989e454fe2174510ece7066264d.scope - libcontainer container 8c005aeb4346227d64d6ed4633b1e5f25c384989e454fe2174510ece7066264d. 
Dec 13 01:32:50.904110 containerd[1494]: time="2024-12-13T01:32:50.903939376Z" level=info msg="CreateContainer within sandbox \"154ce6a063ceaa872684aeceb3ef6d4a0430d90193addb447d97d2aab45f9482\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"621dc7376a7d3888e88c52b011c385255cb0cd8b6de28e58212d844274dac1d2\"" Dec 13 01:32:50.907270 containerd[1494]: time="2024-12-13T01:32:50.907223148Z" level=info msg="StartContainer for \"621dc7376a7d3888e88c52b011c385255cb0cd8b6de28e58212d844274dac1d2\"" Dec 13 01:32:50.941427 systemd[1]: Started cri-containerd-621dc7376a7d3888e88c52b011c385255cb0cd8b6de28e58212d844274dac1d2.scope - libcontainer container 621dc7376a7d3888e88c52b011c385255cb0cd8b6de28e58212d844274dac1d2. Dec 13 01:32:50.956920 containerd[1494]: time="2024-12-13T01:32:50.956860689Z" level=info msg="StartContainer for \"8c005aeb4346227d64d6ed4633b1e5f25c384989e454fe2174510ece7066264d\" returns successfully" Dec 13 01:32:50.983758 containerd[1494]: time="2024-12-13T01:32:50.983719141Z" level=info msg="StartContainer for \"621dc7376a7d3888e88c52b011c385255cb0cd8b6de28e58212d844274dac1d2\" returns successfully" Dec 13 01:32:51.643684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334640147.mount: Deactivated successfully. 
Dec 13 01:32:51.647031 kubelet[2794]: I1213 01:32:51.644941 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tldkx" podStartSLOduration=26.644910589 podStartE2EDuration="26.644910589s" podCreationTimestamp="2024-12-13 01:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:51.630844604 +0000 UTC m=+41.286929070" watchObservedRunningTime="2024-12-13 01:32:51.644910589 +0000 UTC m=+41.300995045" Dec 13 01:32:51.658007 kubelet[2794]: I1213 01:32:51.657308 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4rwlm" podStartSLOduration=26.65728461 podStartE2EDuration="26.65728461s" podCreationTimestamp="2024-12-13 01:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:51.656651111 +0000 UTC m=+41.312735577" watchObservedRunningTime="2024-12-13 01:32:51.65728461 +0000 UTC m=+41.313369056" Dec 13 01:32:57.364943 kubelet[2794]: I1213 01:32:57.364648 2794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:32:58.129377 update_engine[1474]: I20241213 01:32:58.129199 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:58.130342 update_engine[1474]: I20241213 01:32:58.129845 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:58.130342 update_engine[1474]: I20241213 01:32:58.130314 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:32:58.131359 update_engine[1474]: E20241213 01:32:58.131282 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:58.131533 update_engine[1474]: I20241213 01:32:58.131377 1474 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:32:58.131533 update_engine[1474]: I20241213 01:32:58.131395 1474 omaha_request_action.cc:617] Omaha request response: Dec 13 01:32:58.131685 update_engine[1474]: E20241213 01:32:58.131620 1474 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 01:32:58.185615 update_engine[1474]: I20241213 01:32:58.185453 1474 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 01:32:58.185615 update_engine[1474]: I20241213 01:32:58.185519 1474 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:32:58.185615 update_engine[1474]: I20241213 01:32:58.185534 1474 update_attempter.cc:306] Processing Done. Dec 13 01:32:58.190059 update_engine[1474]: E20241213 01:32:58.189145 1474 update_attempter.cc:619] Update failed. Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189183 1474 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189194 1474 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189206 1474 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189388 1474 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189452 1474 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189467 1474 omaha_request_action.cc:272] Request: Dec 13 01:32:58.190059 update_engine[1474]: Dec 13 01:32:58.190059 update_engine[1474]: Dec 13 01:32:58.190059 update_engine[1474]: Dec 13 01:32:58.190059 update_engine[1474]: Dec 13 01:32:58.190059 update_engine[1474]: Dec 13 01:32:58.190059 update_engine[1474]: Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189477 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:32:58.190059 update_engine[1474]: I20241213 01:32:58.189774 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.190109 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:32:58.192638 update_engine[1474]: E20241213 01:32:58.191249 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.191344 1474 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.191361 1474 omaha_request_action.cc:617] Omaha request response: Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.191377 1474 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.191391 1474 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.191406 1474 update_attempter.cc:306] Processing Done. Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.191421 1474 update_attempter.cc:310] Error event sent. Dec 13 01:32:58.192638 update_engine[1474]: I20241213 01:32:58.191452 1474 update_check_scheduler.cc:74] Next update check in 40m34s Dec 13 01:32:58.206848 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 01:32:58.206848 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 01:36:58.667003 systemd[1]: Started sshd@7-78.47.95.53:22-147.75.109.163:41366.service - OpenSSH per-connection server daemon (147.75.109.163:41366). Dec 13 01:36:59.701435 sshd[4194]: Accepted publickey for core from 147.75.109.163 port 41366 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:59.703814 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:59.709560 systemd-logind[1473]: New session 8 of user core. 
Dec 13 01:36:59.714149 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:37:01.115680 sshd[4194]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:01.123494 systemd[1]: sshd@7-78.47.95.53:22-147.75.109.163:41366.service: Deactivated successfully. Dec 13 01:37:01.127768 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:37:01.129936 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:37:01.132691 systemd-logind[1473]: Removed session 8. Dec 13 01:37:06.302508 systemd[1]: Started sshd@8-78.47.95.53:22-147.75.109.163:57236.service - OpenSSH per-connection server daemon (147.75.109.163:57236). Dec 13 01:37:07.308076 sshd[4212]: Accepted publickey for core from 147.75.109.163 port 57236 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:07.309926 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:07.314462 systemd-logind[1473]: New session 9 of user core. Dec 13 01:37:07.320116 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:37:08.143177 sshd[4212]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:08.152243 systemd[1]: sshd@8-78.47.95.53:22-147.75.109.163:57236.service: Deactivated successfully. Dec 13 01:37:08.157232 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:37:08.163868 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:37:08.166303 systemd-logind[1473]: Removed session 9. Dec 13 01:37:13.324505 systemd[1]: Started sshd@9-78.47.95.53:22-147.75.109.163:57252.service - OpenSSH per-connection server daemon (147.75.109.163:57252). 
Dec 13 01:37:14.330405 sshd[4228]: Accepted publickey for core from 147.75.109.163 port 57252 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:14.333600 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:14.341531 systemd-logind[1473]: New session 10 of user core. Dec 13 01:37:14.347216 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:37:15.143139 sshd[4228]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:15.151576 systemd[1]: sshd@9-78.47.95.53:22-147.75.109.163:57252.service: Deactivated successfully. Dec 13 01:37:15.156283 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:37:15.157971 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:37:15.159855 systemd-logind[1473]: Removed session 10. Dec 13 01:37:15.325795 systemd[1]: Started sshd@10-78.47.95.53:22-147.75.109.163:57258.service - OpenSSH per-connection server daemon (147.75.109.163:57258). Dec 13 01:37:16.343660 sshd[4242]: Accepted publickey for core from 147.75.109.163 port 57258 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:16.347260 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:16.356402 systemd-logind[1473]: New session 11 of user core. Dec 13 01:37:16.363289 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:37:17.229270 sshd[4242]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:17.239785 systemd[1]: sshd@10-78.47.95.53:22-147.75.109.163:57258.service: Deactivated successfully. Dec 13 01:37:17.245809 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:37:17.247463 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:37:17.249306 systemd-logind[1473]: Removed session 11. 
Dec 13 01:37:17.409443 systemd[1]: Started sshd@11-78.47.95.53:22-147.75.109.163:55736.service - OpenSSH per-connection server daemon (147.75.109.163:55736). Dec 13 01:37:18.438722 sshd[4253]: Accepted publickey for core from 147.75.109.163 port 55736 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:18.442504 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:18.452865 systemd-logind[1473]: New session 12 of user core. Dec 13 01:37:18.460552 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:37:19.247675 sshd[4253]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:19.256454 systemd[1]: sshd@11-78.47.95.53:22-147.75.109.163:55736.service: Deactivated successfully. Dec 13 01:37:19.261698 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:37:19.263148 systemd-logind[1473]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:37:19.265441 systemd-logind[1473]: Removed session 12. Dec 13 01:37:24.426450 systemd[1]: Started sshd@12-78.47.95.53:22-147.75.109.163:55748.service - OpenSSH per-connection server daemon (147.75.109.163:55748). Dec 13 01:37:25.425133 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 55748 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:25.428542 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:25.437242 systemd-logind[1473]: New session 13 of user core. Dec 13 01:37:25.443237 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:37:26.209062 sshd[4266]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:26.216949 systemd[1]: sshd@12-78.47.95.53:22-147.75.109.163:55748.service: Deactivated successfully. Dec 13 01:37:26.222230 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:37:26.223616 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 01:37:26.225865 systemd-logind[1473]: Removed session 13. Dec 13 01:37:26.387526 systemd[1]: Started sshd@13-78.47.95.53:22-147.75.109.163:33792.service - OpenSSH per-connection server daemon (147.75.109.163:33792). Dec 13 01:37:27.373401 sshd[4279]: Accepted publickey for core from 147.75.109.163 port 33792 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:27.377457 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:27.385602 systemd-logind[1473]: New session 14 of user core. Dec 13 01:37:27.396332 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:37:28.457877 sshd[4279]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:28.471946 systemd[1]: sshd@13-78.47.95.53:22-147.75.109.163:33792.service: Deactivated successfully. Dec 13 01:37:28.477765 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:37:28.479258 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:37:28.481590 systemd-logind[1473]: Removed session 14. Dec 13 01:37:28.642074 systemd[1]: Started sshd@14-78.47.95.53:22-147.75.109.163:33796.service - OpenSSH per-connection server daemon (147.75.109.163:33796). Dec 13 01:37:29.668076 sshd[4292]: Accepted publickey for core from 147.75.109.163 port 33796 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:29.671677 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:29.681753 systemd-logind[1473]: New session 15 of user core. Dec 13 01:37:29.690236 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:37:32.220250 sshd[4292]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:32.227771 systemd[1]: sshd@14-78.47.95.53:22-147.75.109.163:33796.service: Deactivated successfully. Dec 13 01:37:32.232548 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 13 01:37:32.234129 systemd-logind[1473]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:37:32.236451 systemd-logind[1473]: Removed session 15. Dec 13 01:37:32.392238 systemd[1]: Started sshd@15-78.47.95.53:22-147.75.109.163:33798.service - OpenSSH per-connection server daemon (147.75.109.163:33798). Dec 13 01:37:33.393192 sshd[4310]: Accepted publickey for core from 147.75.109.163 port 33798 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:33.396708 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:33.407081 systemd-logind[1473]: New session 16 of user core. Dec 13 01:37:33.414320 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:37:34.396297 sshd[4310]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:34.403554 systemd[1]: sshd@15-78.47.95.53:22-147.75.109.163:33798.service: Deactivated successfully. Dec 13 01:37:34.408537 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:37:34.410101 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:37:34.411971 systemd-logind[1473]: Removed session 16. Dec 13 01:37:34.577626 systemd[1]: Started sshd@16-78.47.95.53:22-147.75.109.163:33810.service - OpenSSH per-connection server daemon (147.75.109.163:33810). Dec 13 01:37:35.579294 sshd[4321]: Accepted publickey for core from 147.75.109.163 port 33810 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:35.582698 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:35.592317 systemd-logind[1473]: New session 17 of user core. Dec 13 01:37:35.602557 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:37:36.354438 sshd[4321]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:36.364495 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit. 
Dec 13 01:37:36.365445 systemd[1]: sshd@16-78.47.95.53:22-147.75.109.163:33810.service: Deactivated successfully. Dec 13 01:37:36.370567 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:37:36.377246 systemd-logind[1473]: Removed session 17. Dec 13 01:37:41.537295 systemd[1]: Started sshd@17-78.47.95.53:22-147.75.109.163:49212.service - OpenSSH per-connection server daemon (147.75.109.163:49212). Dec 13 01:37:42.525951 sshd[4336]: Accepted publickey for core from 147.75.109.163 port 49212 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:42.529289 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:42.538597 systemd-logind[1473]: New session 18 of user core. Dec 13 01:37:42.545205 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:37:43.314717 sshd[4336]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:43.323473 systemd[1]: sshd@17-78.47.95.53:22-147.75.109.163:49212.service: Deactivated successfully. Dec 13 01:37:43.328895 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:37:43.330633 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:37:43.333560 systemd-logind[1473]: Removed session 18. Dec 13 01:37:44.485022 systemd[1]: Started sshd@18-78.47.95.53:22-45.148.10.203:47240.service - OpenSSH per-connection server daemon (45.148.10.203:47240). Dec 13 01:37:44.583052 sshd[4349]: Connection closed by authenticating user root 45.148.10.203 port 47240 [preauth] Dec 13 01:37:44.587553 systemd[1]: sshd@18-78.47.95.53:22-45.148.10.203:47240.service: Deactivated successfully. Dec 13 01:37:48.492413 systemd[1]: Started sshd@19-78.47.95.53:22-147.75.109.163:39568.service - OpenSSH per-connection server daemon (147.75.109.163:39568). Dec 13 01:37:48.618758 systemd[1]: Started sshd@20-78.47.95.53:22-45.148.10.203:47262.service - OpenSSH per-connection server daemon (45.148.10.203:47262). 
Dec 13 01:37:48.721218 sshd[4357]: Connection closed by authenticating user root 45.148.10.203 port 47262 [preauth] Dec 13 01:37:48.727656 systemd[1]: sshd@20-78.47.95.53:22-45.148.10.203:47262.service: Deactivated successfully. Dec 13 01:37:49.497410 sshd[4354]: Accepted publickey for core from 147.75.109.163 port 39568 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:49.500742 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:49.509999 systemd-logind[1473]: New session 19 of user core. Dec 13 01:37:49.520274 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:37:50.305332 sshd[4354]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:50.317373 systemd[1]: sshd@19-78.47.95.53:22-147.75.109.163:39568.service: Deactivated successfully. Dec 13 01:37:50.322780 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:37:50.325141 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:37:50.327368 systemd-logind[1473]: Removed session 19. Dec 13 01:37:50.482539 systemd[1]: Started sshd@21-78.47.95.53:22-147.75.109.163:39574.service - OpenSSH per-connection server daemon (147.75.109.163:39574). Dec 13 01:37:51.499034 sshd[4372]: Accepted publickey for core from 147.75.109.163 port 39574 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:51.502372 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:51.511953 systemd-logind[1473]: New session 20 of user core. Dec 13 01:37:51.519331 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 01:37:53.626749 containerd[1494]: time="2024-12-13T01:37:53.626621256Z" level=info msg="StopContainer for \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\" with timeout 30 (s)" Dec 13 01:37:53.629886 containerd[1494]: time="2024-12-13T01:37:53.628564985Z" level=info msg="Stop container \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\" with signal terminated" Dec 13 01:37:53.689363 systemd[1]: cri-containerd-d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648.scope: Deactivated successfully. Dec 13 01:37:53.720458 containerd[1494]: time="2024-12-13T01:37:53.720122628Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:37:53.733782 containerd[1494]: time="2024-12-13T01:37:53.733118732Z" level=info msg="StopContainer for \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\" with timeout 2 (s)" Dec 13 01:37:53.734513 containerd[1494]: time="2024-12-13T01:37:53.734470069Z" level=info msg="Stop container \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\" with signal terminated" Dec 13 01:37:53.740864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648-rootfs.mount: Deactivated successfully. 
Dec 13 01:37:53.753212 containerd[1494]: time="2024-12-13T01:37:53.752926655Z" level=info msg="shim disconnected" id=d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648 namespace=k8s.io
Dec 13 01:37:53.753212 containerd[1494]: time="2024-12-13T01:37:53.753107243Z" level=warning msg="cleaning up after shim disconnected" id=d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648 namespace=k8s.io
Dec 13 01:37:53.753212 containerd[1494]: time="2024-12-13T01:37:53.753216674Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:53.753884 systemd-networkd[1387]: lxc_health: Link DOWN
Dec 13 01:37:53.753889 systemd-networkd[1387]: lxc_health: Lost carrier
Dec 13 01:37:53.790606 systemd[1]: cri-containerd-1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550.scope: Deactivated successfully.
Dec 13 01:37:53.791310 systemd[1]: cri-containerd-1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550.scope: Consumed 8.850s CPU time.
Dec 13 01:37:53.801315 containerd[1494]: time="2024-12-13T01:37:53.801238331Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:37:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:37:53.808497 containerd[1494]: time="2024-12-13T01:37:53.808377660Z" level=info msg="StopContainer for \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\" returns successfully"
Dec 13 01:37:53.809628 containerd[1494]: time="2024-12-13T01:37:53.809577144Z" level=info msg="StopPodSandbox for \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\""
Dec 13 01:37:53.810179 containerd[1494]: time="2024-12-13T01:37:53.809862654Z" level=info msg="Container to stop \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:53.813906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704-shm.mount: Deactivated successfully.
Dec 13 01:37:53.839916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550-rootfs.mount: Deactivated successfully.
Dec 13 01:37:53.840743 systemd[1]: cri-containerd-69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704.scope: Deactivated successfully.
Dec 13 01:37:53.856962 containerd[1494]: time="2024-12-13T01:37:53.856322258Z" level=info msg="shim disconnected" id=1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550 namespace=k8s.io
Dec 13 01:37:53.856962 containerd[1494]: time="2024-12-13T01:37:53.856397153Z" level=warning msg="cleaning up after shim disconnected" id=1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550 namespace=k8s.io
Dec 13 01:37:53.856962 containerd[1494]: time="2024-12-13T01:37:53.856409777Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:53.872930 containerd[1494]: time="2024-12-13T01:37:53.872869934Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:37:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:37:53.874679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704-rootfs.mount: Deactivated successfully.
Dec 13 01:37:53.881714 containerd[1494]: time="2024-12-13T01:37:53.880470311Z" level=info msg="shim disconnected" id=69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704 namespace=k8s.io
Dec 13 01:37:53.881714 containerd[1494]: time="2024-12-13T01:37:53.880540567Z" level=warning msg="cleaning up after shim disconnected" id=69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704 namespace=k8s.io
Dec 13 01:37:53.881714 containerd[1494]: time="2024-12-13T01:37:53.880554323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:53.882875 containerd[1494]: time="2024-12-13T01:37:53.882333185Z" level=info msg="StopContainer for \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\" returns successfully"
Dec 13 01:37:53.884338 containerd[1494]: time="2024-12-13T01:37:53.884319765Z" level=info msg="StopPodSandbox for \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\""
Dec 13 01:37:53.884429 containerd[1494]: time="2024-12-13T01:37:53.884414819Z" level=info msg="Container to stop \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:53.884490 containerd[1494]: time="2024-12-13T01:37:53.884477149Z" level=info msg="Container to stop \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:53.884666 containerd[1494]: time="2024-12-13T01:37:53.884528478Z" level=info msg="Container to stop \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:53.884666 containerd[1494]: time="2024-12-13T01:37:53.884542265Z" level=info msg="Container to stop \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:53.884666 containerd[1494]: time="2024-12-13T01:37:53.884553095Z" level=info msg="Container to stop \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:37:53.892996 systemd[1]: cri-containerd-f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f.scope: Deactivated successfully.
Dec 13 01:37:53.919518 containerd[1494]: time="2024-12-13T01:37:53.918762594Z" level=info msg="TearDown network for sandbox \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" successfully"
Dec 13 01:37:53.919518 containerd[1494]: time="2024-12-13T01:37:53.918815786Z" level=info msg="StopPodSandbox for \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" returns successfully"
Dec 13 01:37:53.937475 containerd[1494]: time="2024-12-13T01:37:53.937387454Z" level=info msg="shim disconnected" id=f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f namespace=k8s.io
Dec 13 01:37:53.937475 containerd[1494]: time="2024-12-13T01:37:53.937458001Z" level=warning msg="cleaning up after shim disconnected" id=f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f namespace=k8s.io
Dec 13 01:37:53.937475 containerd[1494]: time="2024-12-13T01:37:53.937468121Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:53.959179 containerd[1494]: time="2024-12-13T01:37:53.959071546Z" level=info msg="TearDown network for sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" successfully"
Dec 13 01:37:53.959179 containerd[1494]: time="2024-12-13T01:37:53.959155236Z" level=info msg="StopPodSandbox for \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" returns successfully"
Dec 13 01:37:54.122367 kubelet[2794]: I1213 01:37:54.122294 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-lib-modules\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.123325 kubelet[2794]: I1213 01:37:54.123212 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hostproc\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.123325 kubelet[2794]: I1213 01:37:54.123315 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-kernel\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.123447 kubelet[2794]: I1213 01:37:54.123352 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-etc-cni-netd\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.123447 kubelet[2794]: I1213 01:37:54.123438 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2mqj\" (UniqueName: \"kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-kube-api-access-z2mqj\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.123586 kubelet[2794]: I1213 01:37:54.123537 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-config-path\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.124292 kubelet[2794]: I1213 01:37:54.123608 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txljb\" (UniqueName: \"kubernetes.io/projected/f789b555-b05f-4899-811a-0ec6fe0479b8-kube-api-access-txljb\") pod \"f789b555-b05f-4899-811a-0ec6fe0479b8\" (UID: \"f789b555-b05f-4899-811a-0ec6fe0479b8\") "
Dec 13 01:37:54.124292 kubelet[2794]: I1213 01:37:54.123767 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-xtables-lock\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.124292 kubelet[2794]: I1213 01:37:54.123801 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-run\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.124292 kubelet[2794]: I1213 01:37:54.123831 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-bpf-maps\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.124292 kubelet[2794]: I1213 01:37:54.123864 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-net\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.124292 kubelet[2794]: I1213 01:37:54.123901 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hubble-tls\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.125163 kubelet[2794]: I1213 01:37:54.123943 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-clustermesh-secrets\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.125163 kubelet[2794]: I1213 01:37:54.124001 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cni-path\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.125163 kubelet[2794]: I1213 01:37:54.124040 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f789b555-b05f-4899-811a-0ec6fe0479b8-cilium-config-path\") pod \"f789b555-b05f-4899-811a-0ec6fe0479b8\" (UID: \"f789b555-b05f-4899-811a-0ec6fe0479b8\") "
Dec 13 01:37:54.125163 kubelet[2794]: I1213 01:37:54.124073 2794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-cgroup\") pod \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\" (UID: \"3e29260e-d0a9-434d-8eb2-02da89d8fc6e\") "
Dec 13 01:37:54.125553 kubelet[2794]: I1213 01:37:54.122546 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.125553 kubelet[2794]: I1213 01:37:54.125541 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.125754 kubelet[2794]: I1213 01:37:54.125595 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.125754 kubelet[2794]: I1213 01:37:54.125630 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.125754 kubelet[2794]: I1213 01:37:54.125663 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.126215 kubelet[2794]: I1213 01:37:54.124200 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.126215 kubelet[2794]: I1213 01:37:54.125891 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.126215 kubelet[2794]: I1213 01:37:54.125925 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.126215 kubelet[2794]: I1213 01:37:54.125957 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.133171 kubelet[2794]: I1213 01:37:54.131077 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:37:54.148460 kubelet[2794]: I1213 01:37:54.148374 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:37:54.150790 kubelet[2794]: I1213 01:37:54.150751 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-kube-api-access-z2mqj" (OuterVolumeSpecName: "kube-api-access-z2mqj") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "kube-api-access-z2mqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:37:54.155166 kubelet[2794]: I1213 01:37:54.155073 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:37:54.157093 kubelet[2794]: I1213 01:37:54.156934 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f789b555-b05f-4899-811a-0ec6fe0479b8-kube-api-access-txljb" (OuterVolumeSpecName: "kube-api-access-txljb") pod "f789b555-b05f-4899-811a-0ec6fe0479b8" (UID: "f789b555-b05f-4899-811a-0ec6fe0479b8"). InnerVolumeSpecName "kube-api-access-txljb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:37:54.158425 kubelet[2794]: I1213 01:37:54.158390 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f789b555-b05f-4899-811a-0ec6fe0479b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f789b555-b05f-4899-811a-0ec6fe0479b8" (UID: "f789b555-b05f-4899-811a-0ec6fe0479b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:37:54.160557 kubelet[2794]: I1213 01:37:54.160492 2794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e29260e-d0a9-434d-8eb2-02da89d8fc6e" (UID: "3e29260e-d0a9-434d-8eb2-02da89d8fc6e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:37:54.228103 kubelet[2794]: I1213 01:37:54.227963 2794 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-xtables-lock\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228103 kubelet[2794]: I1213 01:37:54.228076 2794 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z2mqj\" (UniqueName: \"kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-kube-api-access-z2mqj\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228103 kubelet[2794]: I1213 01:37:54.228097 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-config-path\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228103 kubelet[2794]: I1213 01:37:54.228116 2794 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-txljb\" (UniqueName: \"kubernetes.io/projected/f789b555-b05f-4899-811a-0ec6fe0479b8-kube-api-access-txljb\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228139 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-run\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228162 2794 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-bpf-maps\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228178 2794 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-net\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228206 2794 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cni-path\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228226 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f789b555-b05f-4899-811a-0ec6fe0479b8-cilium-config-path\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228246 2794 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hubble-tls\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228264 2794 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-clustermesh-secrets\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.228602 kubelet[2794]: I1213 01:37:54.228288 2794 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-cilium-cgroup\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.229434 kubelet[2794]: I1213 01:37:54.228305 2794 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-lib-modules\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.229434 kubelet[2794]: I1213 01:37:54.228323 2794 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-etc-cni-netd\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.229434 kubelet[2794]: I1213 01:37:54.228340 2794 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-hostproc\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.229434 kubelet[2794]: I1213 01:37:54.228357 2794 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e29260e-d0a9-434d-8eb2-02da89d8fc6e-host-proc-sys-kernel\") on node \"ci-4081-2-1-2-54968c70cf\" DevicePath \"\""
Dec 13 01:37:54.396084 kubelet[2794]: I1213 01:37:54.395750 2794 scope.go:117] "RemoveContainer" containerID="d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648"
Dec 13 01:37:54.401598 containerd[1494]: time="2024-12-13T01:37:54.400176078Z" level=info msg="RemoveContainer for \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\""
Dec 13 01:37:54.412628 containerd[1494]: time="2024-12-13T01:37:54.412577958Z" level=info msg="RemoveContainer for \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\" returns successfully"
Dec 13 01:37:54.417655 kubelet[2794]: I1213 01:37:54.417592 2794 scope.go:117] "RemoveContainer" containerID="d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648"
Dec 13 01:37:54.426132 systemd[1]: Removed slice kubepods-besteffort-podf789b555_b05f_4899_811a_0ec6fe0479b8.slice - libcontainer container kubepods-besteffort-podf789b555_b05f_4899_811a_0ec6fe0479b8.slice.
Dec 13 01:37:54.446839 systemd[1]: Removed slice kubepods-burstable-pod3e29260e_d0a9_434d_8eb2_02da89d8fc6e.slice - libcontainer container kubepods-burstable-pod3e29260e_d0a9_434d_8eb2_02da89d8fc6e.slice.
Dec 13 01:37:54.447362 systemd[1]: kubepods-burstable-pod3e29260e_d0a9_434d_8eb2_02da89d8fc6e.slice: Consumed 8.985s CPU time.
Dec 13 01:37:54.455937 containerd[1494]: time="2024-12-13T01:37:54.432432262Z" level=error msg="ContainerStatus for \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\": not found"
Dec 13 01:37:54.478432 kubelet[2794]: E1213 01:37:54.478361 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\": not found" containerID="d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648"
Dec 13 01:37:54.478682 kubelet[2794]: I1213 01:37:54.478468 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648"} err="failed to get container status \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\": rpc error: code = NotFound desc = an error occurred when try to find container \"d18040f1f596af8ed9bbde2aa6f47c79590647427a792fa8f43053c4bf2c2648\": not found"
Dec 13 01:37:54.478682 kubelet[2794]: I1213 01:37:54.478579 2794 scope.go:117] "RemoveContainer" containerID="1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550"
Dec 13 01:37:54.481485 containerd[1494]: time="2024-12-13T01:37:54.481425215Z" level=info msg="RemoveContainer for \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\""
Dec 13 01:37:54.487548 containerd[1494]: time="2024-12-13T01:37:54.487501664Z" level=info msg="RemoveContainer for \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\" returns successfully"
Dec 13 01:37:54.487767 kubelet[2794]: I1213 01:37:54.487733 2794 scope.go:117] "RemoveContainer" containerID="7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966"
Dec 13 01:37:54.489341 containerd[1494]: time="2024-12-13T01:37:54.489296637Z" level=info msg="RemoveContainer for \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\""
Dec 13 01:37:54.494158 containerd[1494]: time="2024-12-13T01:37:54.494104460Z" level=info msg="RemoveContainer for \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\" returns successfully"
Dec 13 01:37:54.494408 kubelet[2794]: I1213 01:37:54.494359 2794 scope.go:117] "RemoveContainer" containerID="8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456"
Dec 13 01:37:54.496523 containerd[1494]: time="2024-12-13T01:37:54.496476745Z" level=info msg="RemoveContainer for \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\""
Dec 13 01:37:54.503355 containerd[1494]: time="2024-12-13T01:37:54.503303492Z" level=info msg="RemoveContainer for \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\" returns successfully"
Dec 13 01:37:54.503516 kubelet[2794]: I1213 01:37:54.503483 2794 scope.go:117] "RemoveContainer" containerID="36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0"
Dec 13 01:37:54.505040 containerd[1494]: time="2024-12-13T01:37:54.504962451Z" level=info msg="RemoveContainer for \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\""
Dec 13 01:37:54.510241 containerd[1494]: time="2024-12-13T01:37:54.510182319Z" level=info msg="RemoveContainer for \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\" returns successfully"
Dec 13 01:37:54.510398 kubelet[2794]: I1213 01:37:54.510364 2794 scope.go:117] "RemoveContainer" containerID="8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5"
Dec 13 01:37:54.512114 containerd[1494]: time="2024-12-13T01:37:54.511886015Z" level=info msg="RemoveContainer for \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\""
Dec 13 01:37:54.517539 containerd[1494]: time="2024-12-13T01:37:54.517463904Z" level=info msg="RemoveContainer for \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\" returns successfully"
Dec 13 01:37:54.517900 kubelet[2794]: I1213 01:37:54.517786 2794 scope.go:117] "RemoveContainer" containerID="1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550"
Dec 13 01:37:54.518213 containerd[1494]: time="2024-12-13T01:37:54.518142252Z" level=error msg="ContainerStatus for \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\": not found"
Dec 13 01:37:54.518547 kubelet[2794]: E1213 01:37:54.518478 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\": not found" containerID="1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550"
Dec 13 01:37:54.518610 kubelet[2794]: I1213 01:37:54.518545 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550"} err="failed to get container status \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ff0204d139cae43619e2ab27f737a5745b6ff229307f4802cbbac04c63cc550\": not found"
Dec 13 01:37:54.518610 kubelet[2794]: I1213 01:37:54.518594 2794 scope.go:117] "RemoveContainer" containerID="7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966"
Dec 13 01:37:54.518968 containerd[1494]: time="2024-12-13T01:37:54.518881557Z" level=error msg="ContainerStatus for \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\": not found"
Dec 13 01:37:54.519404 kubelet[2794]: E1213 01:37:54.519207 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\": not found" containerID="7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966"
Dec 13 01:37:54.519404 kubelet[2794]: I1213 01:37:54.519254 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966"} err="failed to get container status \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\": rpc error: code = NotFound desc = an error occurred when try to find container \"7362ad9a82f87cde7a3f60312388cc3af4c18e128f05637871df82890b99d966\": not found"
Dec 13 01:37:54.519404 kubelet[2794]: I1213 01:37:54.519287 2794 scope.go:117] "RemoveContainer" containerID="8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456"
Dec 13 01:37:54.519896 containerd[1494]: time="2024-12-13T01:37:54.519777366Z" level=error msg="ContainerStatus for \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\": not found"
Dec 13 01:37:54.520057 kubelet[2794]: E1213 01:37:54.519959 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\": not found" containerID="8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456"
Dec 13 01:37:54.520057 kubelet[2794]: I1213 01:37:54.520025 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456"} err="failed to get container status \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a7b4a77ac3eb4ba78442f1835b5fdc0069a69d191600da5af698a2e35f3f456\": not found"
Dec 13 01:37:54.520057 kubelet[2794]: I1213 01:37:54.520048 2794 scope.go:117] "RemoveContainer" containerID="36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0"
Dec 13 01:37:54.520443 containerd[1494]: time="2024-12-13T01:37:54.520343367Z" level=error msg="ContainerStatus for \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\": not found"
Dec 13 01:37:54.520551 kubelet[2794]: E1213 01:37:54.520504 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\": not found" containerID="36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0"
Dec 13 01:37:54.520599 kubelet[2794]: I1213 01:37:54.520540 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0"} err="failed to get container status \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"36b60ecddf5b8eb073af21412ff17446cebe23965b636b19747c99b3db76b4d0\": not found"
Dec 13 01:37:54.520599 kubelet[2794]: I1213 01:37:54.520565 2794 scope.go:117] "RemoveContainer" containerID="8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5"
Dec 13 01:37:54.520841 containerd[1494]: time="2024-12-13T01:37:54.520787985Z" level=error msg="ContainerStatus for \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\": not found"
Dec 13 01:37:54.521025 kubelet[2794]: E1213 01:37:54.520953 2794 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\": not found" containerID="8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5"
Dec 13 01:37:54.521146 kubelet[2794]: I1213 01:37:54.521021 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5"} err="failed to get container status \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c1f0abb81ba1115cc61a4f28c325e677017f9210b158ad4f47011796da078f5\": not found"
Dec 13 01:37:54.683260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f-rootfs.mount: Deactivated successfully.
Dec 13 01:37:54.683497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f-shm.mount: Deactivated successfully.
Dec 13 01:37:54.683679 systemd[1]: var-lib-kubelet-pods-3e29260e\x2dd0a9\x2d434d\x2d8eb2\x2d02da89d8fc6e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:37:54.683852 systemd[1]: var-lib-kubelet-pods-3e29260e\x2dd0a9\x2d434d\x2d8eb2\x2d02da89d8fc6e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:37:54.684622 systemd[1]: var-lib-kubelet-pods-f789b555\x2db05f\x2d4899\x2d811a\x2d0ec6fe0479b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxljb.mount: Deactivated successfully. Dec 13 01:37:54.684916 systemd[1]: var-lib-kubelet-pods-3e29260e\x2dd0a9\x2d434d\x2d8eb2\x2d02da89d8fc6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz2mqj.mount: Deactivated successfully. Dec 13 01:37:55.633557 kubelet[2794]: E1213 01:37:55.633480 2794 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:37:55.679293 sshd[4372]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:55.688324 systemd[1]: sshd@21-78.47.95.53:22-147.75.109.163:39574.service: Deactivated successfully. Dec 13 01:37:55.693094 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:37:55.697100 systemd-logind[1473]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:37:55.700439 systemd-logind[1473]: Removed session 20. Dec 13 01:37:55.861892 systemd[1]: Started sshd@22-78.47.95.53:22-147.75.109.163:39576.service - OpenSSH per-connection server daemon (147.75.109.163:39576). 
Dec 13 01:37:56.467727 kubelet[2794]: I1213 01:37:56.467646 2794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" path="/var/lib/kubelet/pods/3e29260e-d0a9-434d-8eb2-02da89d8fc6e/volumes" Dec 13 01:37:56.469626 kubelet[2794]: I1213 01:37:56.469584 2794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f789b555-b05f-4899-811a-0ec6fe0479b8" path="/var/lib/kubelet/pods/f789b555-b05f-4899-811a-0ec6fe0479b8/volumes" Dec 13 01:37:56.862731 sshd[4538]: Accepted publickey for core from 147.75.109.163 port 39576 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:56.866791 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:56.878637 systemd-logind[1473]: New session 21 of user core. Dec 13 01:37:56.885507 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:37:58.009828 kubelet[2794]: I1213 01:37:58.009643 2794 topology_manager.go:215] "Topology Admit Handler" podUID="577e4a68-14c1-474f-afc0-67ca9c2ed295" podNamespace="kube-system" podName="cilium-ll5x6" Dec 13 01:37:58.016377 kubelet[2794]: E1213 01:37:58.016302 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" containerName="mount-cgroup" Dec 13 01:37:58.016377 kubelet[2794]: E1213 01:37:58.016398 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" containerName="apply-sysctl-overwrites" Dec 13 01:37:58.016634 kubelet[2794]: E1213 01:37:58.016414 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" containerName="mount-bpf-fs" Dec 13 01:37:58.016634 kubelet[2794]: E1213 01:37:58.016442 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" containerName="clean-cilium-state" Dec 13 01:37:58.016634 kubelet[2794]: E1213 
01:37:58.016453 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" containerName="cilium-agent" Dec 13 01:37:58.016634 kubelet[2794]: E1213 01:37:58.016467 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f789b555-b05f-4899-811a-0ec6fe0479b8" containerName="cilium-operator" Dec 13 01:37:58.028547 kubelet[2794]: I1213 01:37:58.028472 2794 memory_manager.go:354] "RemoveStaleState removing state" podUID="f789b555-b05f-4899-811a-0ec6fe0479b8" containerName="cilium-operator" Dec 13 01:37:58.028547 kubelet[2794]: I1213 01:37:58.028543 2794 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e29260e-d0a9-434d-8eb2-02da89d8fc6e" containerName="cilium-agent" Dec 13 01:37:58.070590 systemd[1]: Created slice kubepods-burstable-pod577e4a68_14c1_474f_afc0_67ca9c2ed295.slice - libcontainer container kubepods-burstable-pod577e4a68_14c1_474f_afc0_67ca9c2ed295.slice. Dec 13 01:37:58.154482 kubelet[2794]: I1213 01:37:58.154239 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-etc-cni-netd\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.154482 kubelet[2794]: I1213 01:37:58.154302 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/577e4a68-14c1-474f-afc0-67ca9c2ed295-cilium-ipsec-secrets\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.154482 kubelet[2794]: I1213 01:37:58.154330 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/577e4a68-14c1-474f-afc0-67ca9c2ed295-hubble-tls\") pod \"cilium-ll5x6\" 
(UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.154482 kubelet[2794]: I1213 01:37:58.154351 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-cilium-run\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.154482 kubelet[2794]: I1213 01:37:58.154370 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-host-proc-sys-kernel\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.154482 kubelet[2794]: I1213 01:37:58.154390 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-cni-path\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155177 kubelet[2794]: I1213 01:37:58.154409 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/577e4a68-14c1-474f-afc0-67ca9c2ed295-clustermesh-secrets\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155177 kubelet[2794]: I1213 01:37:58.154430 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-bpf-maps\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155177 kubelet[2794]: I1213 
01:37:58.154528 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-lib-modules\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155177 kubelet[2794]: I1213 01:37:58.154617 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-xtables-lock\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155177 kubelet[2794]: I1213 01:37:58.154665 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/577e4a68-14c1-474f-afc0-67ca9c2ed295-cilium-config-path\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155177 kubelet[2794]: I1213 01:37:58.154713 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-cilium-cgroup\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155345 kubelet[2794]: I1213 01:37:58.154749 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9467\" (UniqueName: \"kubernetes.io/projected/577e4a68-14c1-474f-afc0-67ca9c2ed295-kube-api-access-v9467\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155345 kubelet[2794]: I1213 01:37:58.154796 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-host-proc-sys-net\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.155345 kubelet[2794]: I1213 01:37:58.154853 2794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/577e4a68-14c1-474f-afc0-67ca9c2ed295-hostproc\") pod \"cilium-ll5x6\" (UID: \"577e4a68-14c1-474f-afc0-67ca9c2ed295\") " pod="kube-system/cilium-ll5x6" Dec 13 01:37:58.224863 sshd[4538]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:58.235917 systemd[1]: sshd@22-78.47.95.53:22-147.75.109.163:39576.service: Deactivated successfully. Dec 13 01:37:58.242073 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:37:58.244298 systemd-logind[1473]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:37:58.247223 systemd-logind[1473]: Removed session 21. Dec 13 01:37:58.336085 kubelet[2794]: I1213 01:37:58.335097 2794 setters.go:580] "Node became not ready" node="ci-4081-2-1-2-54968c70cf" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:37:58Z","lastTransitionTime":"2024-12-13T01:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:37:58.377837 containerd[1494]: time="2024-12-13T01:37:58.377797639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ll5x6,Uid:577e4a68-14c1-474f-afc0-67ca9c2ed295,Namespace:kube-system,Attempt:0,}" Dec 13 01:37:58.411284 systemd[1]: Started sshd@23-78.47.95.53:22-147.75.109.163:35280.service - OpenSSH per-connection server daemon (147.75.109.163:35280). 
Dec 13 01:37:58.434192 containerd[1494]: time="2024-12-13T01:37:58.434026855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:37:58.434192 containerd[1494]: time="2024-12-13T01:37:58.434113351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:37:58.434192 containerd[1494]: time="2024-12-13T01:37:58.434123891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:58.436397 containerd[1494]: time="2024-12-13T01:37:58.436104732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:37:58.488196 systemd[1]: Started cri-containerd-3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3.scope - libcontainer container 3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3. 
Dec 13 01:37:58.522135 containerd[1494]: time="2024-12-13T01:37:58.521911604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ll5x6,Uid:577e4a68-14c1-474f-afc0-67ca9c2ed295,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\"" Dec 13 01:37:58.525855 containerd[1494]: time="2024-12-13T01:37:58.525795645Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:37:58.540699 containerd[1494]: time="2024-12-13T01:37:58.540650380Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a\"" Dec 13 01:37:58.541914 containerd[1494]: time="2024-12-13T01:37:58.541719682Z" level=info msg="StartContainer for \"e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a\"" Dec 13 01:37:58.574129 systemd[1]: Started cri-containerd-e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a.scope - libcontainer container e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a. Dec 13 01:37:58.608918 containerd[1494]: time="2024-12-13T01:37:58.608664158Z" level=info msg="StartContainer for \"e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a\" returns successfully" Dec 13 01:37:58.633823 systemd[1]: cri-containerd-e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a.scope: Deactivated successfully. 
Dec 13 01:37:58.685719 containerd[1494]: time="2024-12-13T01:37:58.685534450Z" level=info msg="shim disconnected" id=e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a namespace=k8s.io Dec 13 01:37:58.685719 containerd[1494]: time="2024-12-13T01:37:58.685612951Z" level=warning msg="cleaning up after shim disconnected" id=e74142a2e64da408e4ab150cba6c641017876695438ab514c3ddcfa7a25ed96a namespace=k8s.io Dec 13 01:37:58.685719 containerd[1494]: time="2024-12-13T01:37:58.685629153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:59.422777 sshd[4557]: Accepted publickey for core from 147.75.109.163 port 35280 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:59.428402 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:59.436522 containerd[1494]: time="2024-12-13T01:37:59.436431610Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:37:59.454145 systemd-logind[1473]: New session 22 of user core. Dec 13 01:37:59.460658 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 01:37:59.491278 containerd[1494]: time="2024-12-13T01:37:59.490852591Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45\"" Dec 13 01:37:59.500023 kubelet[2794]: E1213 01:37:59.497139 2794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-tldkx" podUID="22895f47-88d9-4179-ad38-350c424b08f5" Dec 13 01:37:59.500663 containerd[1494]: time="2024-12-13T01:37:59.498614815Z" level=info msg="StartContainer for \"3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45\"" Dec 13 01:37:59.560187 systemd[1]: Started cri-containerd-3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45.scope - libcontainer container 3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45. Dec 13 01:37:59.608911 containerd[1494]: time="2024-12-13T01:37:59.608776423Z" level=info msg="StartContainer for \"3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45\" returns successfully" Dec 13 01:37:59.626063 systemd[1]: cri-containerd-3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45.scope: Deactivated successfully. 
Dec 13 01:37:59.668857 containerd[1494]: time="2024-12-13T01:37:59.668756077Z" level=info msg="shim disconnected" id=3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45 namespace=k8s.io Dec 13 01:37:59.668857 containerd[1494]: time="2024-12-13T01:37:59.668810391Z" level=warning msg="cleaning up after shim disconnected" id=3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45 namespace=k8s.io Dec 13 01:37:59.668857 containerd[1494]: time="2024-12-13T01:37:59.668818277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:38:00.104022 sshd[4557]: pam_unix(sshd:session): session closed for user core Dec 13 01:38:00.109696 systemd[1]: sshd@23-78.47.95.53:22-147.75.109.163:35280.service: Deactivated successfully. Dec 13 01:38:00.113946 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:38:00.116812 systemd-logind[1473]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:38:00.118340 systemd-logind[1473]: Removed session 22. Dec 13 01:38:00.272814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dedb6705824c1dbc314126f3081f960090103ecd3a92b96ceee66e691e38c45-rootfs.mount: Deactivated successfully. Dec 13 01:38:00.279299 systemd[1]: Started sshd@24-78.47.95.53:22-147.75.109.163:35292.service - OpenSSH per-connection server daemon (147.75.109.163:35292). 
Dec 13 01:38:00.439416 containerd[1494]: time="2024-12-13T01:38:00.439360472Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:38:00.476024 containerd[1494]: time="2024-12-13T01:38:00.473685680Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e\"" Dec 13 01:38:00.476024 containerd[1494]: time="2024-12-13T01:38:00.474876626Z" level=info msg="StartContainer for \"d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e\"" Dec 13 01:38:00.554292 systemd[1]: Started cri-containerd-d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e.scope - libcontainer container d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e. Dec 13 01:38:00.591384 containerd[1494]: time="2024-12-13T01:38:00.591344978Z" level=info msg="StartContainer for \"d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e\" returns successfully" Dec 13 01:38:00.597808 systemd[1]: cri-containerd-d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e.scope: Deactivated successfully. 
Dec 13 01:38:00.626403 containerd[1494]: time="2024-12-13T01:38:00.626330509Z" level=info msg="shim disconnected" id=d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e namespace=k8s.io Dec 13 01:38:00.626903 containerd[1494]: time="2024-12-13T01:38:00.626500337Z" level=warning msg="cleaning up after shim disconnected" id=d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e namespace=k8s.io Dec 13 01:38:00.626903 containerd[1494]: time="2024-12-13T01:38:00.626512891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:38:00.642771 kubelet[2794]: E1213 01:38:00.642669 2794 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:38:01.272924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d71279ef593926d9e3ab82c9f89272485cbdf7b573842af6f2ae965957b2826e-rootfs.mount: Deactivated successfully. Dec 13 01:38:01.284153 sshd[4728]: Accepted publickey for core from 147.75.109.163 port 35292 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:38:01.286696 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:38:01.296384 systemd-logind[1473]: New session 23 of user core. Dec 13 01:38:01.307380 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 01:38:01.442311 containerd[1494]: time="2024-12-13T01:38:01.442121704Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:38:01.456019 kubelet[2794]: E1213 01:38:01.453715 2794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-tldkx" podUID="22895f47-88d9-4179-ad38-350c424b08f5" Dec 13 01:38:01.477504 containerd[1494]: time="2024-12-13T01:38:01.477415563Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab\"" Dec 13 01:38:01.478508 containerd[1494]: time="2024-12-13T01:38:01.478345818Z" level=info msg="StartContainer for \"70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab\"" Dec 13 01:38:01.531245 systemd[1]: Started cri-containerd-70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab.scope - libcontainer container 70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab. Dec 13 01:38:01.575467 systemd[1]: cri-containerd-70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab.scope: Deactivated successfully. 
Dec 13 01:38:01.578942 containerd[1494]: time="2024-12-13T01:38:01.578471138Z" level=info msg="StartContainer for \"70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab\" returns successfully" Dec 13 01:38:01.625359 containerd[1494]: time="2024-12-13T01:38:01.625270623Z" level=info msg="shim disconnected" id=70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab namespace=k8s.io Dec 13 01:38:01.625359 containerd[1494]: time="2024-12-13T01:38:01.625338434Z" level=warning msg="cleaning up after shim disconnected" id=70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab namespace=k8s.io Dec 13 01:38:01.625359 containerd[1494]: time="2024-12-13T01:38:01.625354866Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:38:02.274273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70a2b7cac6a49bbc8069b0b8e7fc4ed2998cbb58455ffe8bc14cecbc246d84ab-rootfs.mount: Deactivated successfully. Dec 13 01:38:02.449304 containerd[1494]: time="2024-12-13T01:38:02.448757425Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:38:02.491430 containerd[1494]: time="2024-12-13T01:38:02.491259983Z" level=info msg="CreateContainer within sandbox \"3e82a4d40e1a9a5e3b2170c13b350a8d38296ee260a6991500fcac5fbcf0c5d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4\"" Dec 13 01:38:02.493058 containerd[1494]: time="2024-12-13T01:38:02.492174186Z" level=info msg="StartContainer for \"69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4\"" Dec 13 01:38:02.539289 systemd[1]: Started cri-containerd-69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4.scope - libcontainer container 69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4. 
Dec 13 01:38:02.575371 containerd[1494]: time="2024-12-13T01:38:02.575278044Z" level=info msg="StartContainer for \"69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4\" returns successfully" Dec 13 01:38:03.246146 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 01:38:03.288103 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9 Dec 13 01:38:03.309147 kernel: DRBG: Continuing without Jitter RNG Dec 13 01:38:03.452808 kubelet[2794]: E1213 01:38:03.452752 2794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-tldkx" podUID="22895f47-88d9-4179-ad38-350c424b08f5" Dec 13 01:38:04.273059 systemd[1]: run-containerd-runc-k8s.io-69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4-runc.oLL0G0.mount: Deactivated successfully. Dec 13 01:38:05.453212 kubelet[2794]: E1213 01:38:05.453151 2794 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-tldkx" podUID="22895f47-88d9-4179-ad38-350c424b08f5" Dec 13 01:38:06.474635 systemd[1]: run-containerd-runc-k8s.io-69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4-runc.ornywU.mount: Deactivated successfully. 
Dec 13 01:38:06.601592 systemd-networkd[1387]: lxc_health: Link UP Dec 13 01:38:06.609236 systemd-networkd[1387]: lxc_health: Gained carrier Dec 13 01:38:08.417113 kubelet[2794]: I1213 01:38:08.415792 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ll5x6" podStartSLOduration=11.41575724 podStartE2EDuration="11.41575724s" podCreationTimestamp="2024-12-13 01:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:38:03.473254548 +0000 UTC m=+353.129338994" watchObservedRunningTime="2024-12-13 01:38:08.41575724 +0000 UTC m=+358.071841706" Dec 13 01:38:08.570586 systemd-networkd[1387]: lxc_health: Gained IPv6LL Dec 13 01:38:08.730789 systemd[1]: run-containerd-runc-k8s.io-69aaa92748e68cbf726454248c6b8ce3844f010efa1a9f7f28b2af10dcb199d4-runc.COLGyL.mount: Deactivated successfully. Dec 13 01:38:10.492016 containerd[1494]: time="2024-12-13T01:38:10.491726616Z" level=info msg="StopPodSandbox for \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\"" Dec 13 01:38:10.492016 containerd[1494]: time="2024-12-13T01:38:10.491957922Z" level=info msg="TearDown network for sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" successfully" Dec 13 01:38:10.492016 containerd[1494]: time="2024-12-13T01:38:10.491969504Z" level=info msg="StopPodSandbox for \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" returns successfully" Dec 13 01:38:10.494176 containerd[1494]: time="2024-12-13T01:38:10.493051201Z" level=info msg="RemovePodSandbox for \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\"" Dec 13 01:38:10.496475 containerd[1494]: time="2024-12-13T01:38:10.496448608Z" level=info msg="Forcibly stopping sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\"" Dec 13 01:38:10.496589 containerd[1494]: time="2024-12-13T01:38:10.496573628Z" level=info 
msg="TearDown network for sandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" successfully"
Dec 13 01:38:10.504640 containerd[1494]: time="2024-12-13T01:38:10.504526245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:38:10.504640 containerd[1494]: time="2024-12-13T01:38:10.504566934Z" level=info msg="RemovePodSandbox \"f2e29207363bfec2ab3ca2c39bb6f63c74b8f0a035281a8f4084709f891e251f\" returns successfully"
Dec 13 01:38:10.505018 containerd[1494]: time="2024-12-13T01:38:10.505000089Z" level=info msg="StopPodSandbox for \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\""
Dec 13 01:38:10.505151 containerd[1494]: time="2024-12-13T01:38:10.505135821Z" level=info msg="TearDown network for sandbox \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" successfully"
Dec 13 01:38:10.505204 containerd[1494]: time="2024-12-13T01:38:10.505192932Z" level=info msg="StopPodSandbox for \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" returns successfully"
Dec 13 01:38:10.505484 containerd[1494]: time="2024-12-13T01:38:10.505467341Z" level=info msg="RemovePodSandbox for \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\""
Dec 13 01:38:10.505604 containerd[1494]: time="2024-12-13T01:38:10.505590929Z" level=info msg="Forcibly stopping sandbox \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\""
Dec 13 01:38:10.508595 containerd[1494]: time="2024-12-13T01:38:10.506041298Z" level=info msg="TearDown network for sandbox \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" successfully"
Dec 13 01:38:10.511795 containerd[1494]: time="2024-12-13T01:38:10.511774392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:38:10.511922 containerd[1494]: time="2024-12-13T01:38:10.511895927Z" level=info msg="RemovePodSandbox \"69b9f8ba37133c7c791bea4d705a6caafff1eb20e2655e7e62391e6d49ae1704\" returns successfully"
Dec 13 01:38:11.005385 kubelet[2794]: E1213 01:38:11.005317 2794 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:38708->127.0.0.1:46053: read tcp 127.0.0.1:38708->127.0.0.1:46053: read: connection reset by peer
Dec 13 01:38:13.350378 sshd[4728]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:13.360261 systemd[1]: sshd@24-78.47.95.53:22-147.75.109.163:35292.service: Deactivated successfully.
Dec 13 01:38:13.367271 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:38:13.368362 systemd-logind[1473]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:38:13.369817 systemd-logind[1473]: Removed session 23.
Dec 13 01:38:28.364394 systemd[1]: cri-containerd-3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923.scope: Deactivated successfully.
Dec 13 01:38:28.366200 systemd[1]: cri-containerd-3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923.scope: Consumed 8.139s CPU time, 24.5M memory peak, 0B memory swap peak.
Dec 13 01:38:28.415481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923-rootfs.mount: Deactivated successfully.
Dec 13 01:38:28.421936 containerd[1494]: time="2024-12-13T01:38:28.421831637Z" level=info msg="shim disconnected" id=3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923 namespace=k8s.io
Dec 13 01:38:28.422946 containerd[1494]: time="2024-12-13T01:38:28.422885491Z" level=warning msg="cleaning up after shim disconnected" id=3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923 namespace=k8s.io
Dec 13 01:38:28.422946 containerd[1494]: time="2024-12-13T01:38:28.422927798Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:28.510909 kubelet[2794]: I1213 01:38:28.510307 2794 scope.go:117] "RemoveContainer" containerID="3d6b5b545d29266a581a56656cda99960dd7e4dbf93526f95f3250b941701923"
Dec 13 01:38:28.514608 containerd[1494]: time="2024-12-13T01:38:28.514550639Z" level=info msg="CreateContainer within sandbox \"1d2fa38872706e7e1a4fc298ddee51d531bbff2039afa70929519ed821f92d7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:38:28.541126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3074377738.mount: Deactivated successfully.
Dec 13 01:38:28.543246 containerd[1494]: time="2024-12-13T01:38:28.543141285Z" level=info msg="CreateContainer within sandbox \"1d2fa38872706e7e1a4fc298ddee51d531bbff2039afa70929519ed821f92d7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"186d13bc9eb31278846c5de2548ae07f9c88df44e19635908e69634429dba172\""
Dec 13 01:38:28.543895 containerd[1494]: time="2024-12-13T01:38:28.543827869Z" level=info msg="StartContainer for \"186d13bc9eb31278846c5de2548ae07f9c88df44e19635908e69634429dba172\""
Dec 13 01:38:28.588240 systemd[1]: Started cri-containerd-186d13bc9eb31278846c5de2548ae07f9c88df44e19635908e69634429dba172.scope - libcontainer container 186d13bc9eb31278846c5de2548ae07f9c88df44e19635908e69634429dba172.
Dec 13 01:38:28.656358 containerd[1494]: time="2024-12-13T01:38:28.656302203Z" level=info msg="StartContainer for \"186d13bc9eb31278846c5de2548ae07f9c88df44e19635908e69634429dba172\" returns successfully"
Dec 13 01:38:28.800933 kubelet[2794]: E1213 01:38:28.800757 2794 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58724->10.0.0.2:2379: read: connection timed out"
Dec 13 01:38:28.812199 systemd[1]: cri-containerd-e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261.scope: Deactivated successfully.
Dec 13 01:38:28.812608 systemd[1]: cri-containerd-e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261.scope: Consumed 2.282s CPU time, 17.7M memory peak, 0B memory swap peak.
Dec 13 01:38:28.865141 containerd[1494]: time="2024-12-13T01:38:28.864465283Z" level=info msg="shim disconnected" id=e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261 namespace=k8s.io
Dec 13 01:38:28.865141 containerd[1494]: time="2024-12-13T01:38:28.864657459Z" level=warning msg="cleaning up after shim disconnected" id=e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261 namespace=k8s.io
Dec 13 01:38:28.865141 containerd[1494]: time="2024-12-13T01:38:28.864673417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:29.415570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261-rootfs.mount: Deactivated successfully.
Dec 13 01:38:29.518416 kubelet[2794]: I1213 01:38:29.518364 2794 scope.go:117] "RemoveContainer" containerID="e701100d820c4842a2ed3cb925e5231144f0fedab69933a49399a1dcf8c18261"
Dec 13 01:38:29.528126 containerd[1494]: time="2024-12-13T01:38:29.528072628Z" level=info msg="CreateContainer within sandbox \"4e0117636c40e79a11b1ddead901660e36304806d3b6b41ba00b55da04b2d8b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:38:29.551848 containerd[1494]: time="2024-12-13T01:38:29.550807646Z" level=info msg="CreateContainer within sandbox \"4e0117636c40e79a11b1ddead901660e36304806d3b6b41ba00b55da04b2d8b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6a3470a9d2be41b103b3c5f5743a91acd90332eb52653e7c4606b1d7f581904f\""
Dec 13 01:38:29.553019 containerd[1494]: time="2024-12-13T01:38:29.552488978Z" level=info msg="StartContainer for \"6a3470a9d2be41b103b3c5f5743a91acd90332eb52653e7c4606b1d7f581904f\""
Dec 13 01:38:29.592102 systemd[1]: Started cri-containerd-6a3470a9d2be41b103b3c5f5743a91acd90332eb52653e7c4606b1d7f581904f.scope - libcontainer container 6a3470a9d2be41b103b3c5f5743a91acd90332eb52653e7c4606b1d7f581904f.
Dec 13 01:38:29.636307 containerd[1494]: time="2024-12-13T01:38:29.636264930Z" level=info msg="StartContainer for \"6a3470a9d2be41b103b3c5f5743a91acd90332eb52653e7c4606b1d7f581904f\" returns successfully"
Dec 13 01:38:30.414366 systemd[1]: run-containerd-runc-k8s.io-6a3470a9d2be41b103b3c5f5743a91acd90332eb52653e7c4606b1d7f581904f-runc.BIT5sr.mount: Deactivated successfully.
Dec 13 01:38:34.444869 kubelet[2794]: E1213 01:38:34.430722 2794 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58538->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-2-54968c70cf.181098d6981401e3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-2-54968c70cf,UID:0c8124d3e0a67cd251f9bd0dfeb3af63,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-2-54968c70cf,},FirstTimestamp:2024-12-13 01:38:23.979512291 +0000 UTC m=+373.635596778,LastTimestamp:2024-12-13 01:38:23.979512291 +0000 UTC m=+373.635596778,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-2-54968c70cf,}"
Dec 13 01:38:38.802057 kubelet[2794]: E1213 01:38:38.801824 2794 controller.go:195] "Failed to update lease" err="Put \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-54968c70cf?timeout=10s\": context deadline exceeded"
Dec 13 01:38:38.981848 kubelet[2794]: I1213 01:38:38.981737 2794 status_manager.go:853] "Failed to get status for pod" podUID="493c7585bd5509d4bc020bd265649454" pod="kube-system/kube-controller-manager-ci-4081-2-1-2-54968c70cf" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58642->10.0.0.2:2379: read: connection timed out"