Jan 30 05:25:42.113974 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 05:25:42.113998 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:25:42.114007 kernel: BIOS-provided physical RAM map:
Jan 30 05:25:42.114014 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 05:25:42.114020 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 05:25:42.114026 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 05:25:42.114034 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 30 05:25:42.114040 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 30 05:25:42.114049 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 05:25:42.114055 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 05:25:42.114062 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 05:25:42.114068 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 05:25:42.114074 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 05:25:42.114081 kernel: NX (Execute Disable) protection: active
Jan 30 05:25:42.114091 kernel: APIC: Static calls initialized
Jan 30 05:25:42.114099 kernel: SMBIOS 3.0.0 present.
Jan 30 05:25:42.114106 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 30 05:25:42.114113 kernel: Hypervisor detected: KVM
Jan 30 05:25:42.114120 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 05:25:42.114128 kernel: kvm-clock: using sched offset of 3748681366 cycles
Jan 30 05:25:42.114135 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 05:25:42.114143 kernel: tsc: Detected 2495.310 MHz processor
Jan 30 05:25:42.114150 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 05:25:42.114161 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 05:25:42.114168 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 30 05:25:42.114176 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 05:25:42.114183 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 05:25:42.114191 kernel: Using GB pages for direct mapping
Jan 30 05:25:42.114198 kernel: ACPI: Early table checksum verification disabled
Jan 30 05:25:42.114206 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 30 05:25:42.114213 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:25:42.114220 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:25:42.114231 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:25:42.114238 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 30 05:25:42.114245 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:25:42.114253 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:25:42.114260 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:25:42.114268 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:25:42.114275 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 30 05:25:42.114282 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 30 05:25:42.114295 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 30 05:25:42.114303 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 30 05:25:42.114310 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 30 05:25:42.114317 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 30 05:25:42.114334 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 30 05:25:42.114342 kernel: No NUMA configuration found
Jan 30 05:25:42.114351 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 30 05:25:42.114359 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 30 05:25:42.114367 kernel: Zone ranges:
Jan 30 05:25:42.114374 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 05:25:42.114381 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 30 05:25:42.114389 kernel: Normal empty
Jan 30 05:25:42.114396 kernel: Movable zone start for each node
Jan 30 05:25:42.114404 kernel: Early memory node ranges
Jan 30 05:25:42.114411 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 05:25:42.114418 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 30 05:25:42.114429 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 30 05:25:42.114436 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 05:25:42.114443 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 05:25:42.114451 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 30 05:25:42.114458 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 05:25:42.114465 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 05:25:42.114473 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 05:25:42.114480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 05:25:42.114488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 05:25:42.114498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 05:25:42.114506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 05:25:42.114513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 05:25:42.114521 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 05:25:42.114528 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 05:25:42.114536 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 05:25:42.114543 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 05:25:42.114551 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 05:25:42.114558 kernel: Booting paravirtualized kernel on KVM
Jan 30 05:25:42.114569 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 05:25:42.114576 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 05:25:42.114584 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 05:25:42.114591 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 05:25:42.114599 kernel: pcpu-alloc: [0] 0 1
Jan 30 05:25:42.114606 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 05:25:42.114615 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:25:42.114623 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 05:25:42.114634 kernel: random: crng init done
Jan 30 05:25:42.114641 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 05:25:42.114649 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 05:25:42.114656 kernel: Fallback order for Node 0: 0
Jan 30 05:25:42.114664 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 30 05:25:42.114671 kernel: Policy zone: DMA32
Jan 30 05:25:42.114679 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 05:25:42.114687 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 30 05:25:42.114695 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 05:25:42.114705 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 05:25:42.114712 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 05:25:42.114720 kernel: Dynamic Preempt: voluntary
Jan 30 05:25:42.114727 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 05:25:42.114735 kernel: rcu: RCU event tracing is enabled.
Jan 30 05:25:42.114743 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 05:25:42.114751 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 05:25:42.114758 kernel: Rude variant of Tasks RCU enabled.
Jan 30 05:25:42.114766 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 05:25:42.114773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 05:25:42.114784 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 05:25:42.114791 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 05:25:42.114798 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 05:25:42.114806 kernel: Console: colour VGA+ 80x25
Jan 30 05:25:42.114813 kernel: printk: console [tty0] enabled
Jan 30 05:25:42.114821 kernel: printk: console [ttyS0] enabled
Jan 30 05:25:42.114828 kernel: ACPI: Core revision 20230628
Jan 30 05:25:42.114835 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 05:25:42.114843 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 05:25:42.114853 kernel: x2apic enabled
Jan 30 05:25:42.114860 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 05:25:42.114868 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 05:25:42.114875 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 05:25:42.114882 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Jan 30 05:25:42.114890 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 05:25:42.114897 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 05:25:42.114905 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 05:25:42.114922 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 05:25:42.114994 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 05:25:42.115002 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 05:25:42.115013 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 05:25:42.115020 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 05:25:42.115028 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 05:25:42.115036 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 05:25:42.115044 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 05:25:42.115052 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 05:25:42.115063 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 05:25:42.115070 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 05:25:42.115078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 05:25:42.115086 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 05:25:42.115094 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 05:25:42.115102 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 05:25:42.115109 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 05:25:42.115119 kernel: Freeing SMP alternatives memory: 32K
Jan 30 05:25:42.115127 kernel: pid_max: default: 32768 minimum: 301
Jan 30 05:25:42.115135 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 05:25:42.115142 kernel: landlock: Up and running.
Jan 30 05:25:42.115150 kernel: SELinux: Initializing.
Jan 30 05:25:42.115158 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:25:42.115166 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:25:42.115174 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 05:25:42.115182 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:25:42.115192 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:25:42.115200 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:25:42.115208 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 05:25:42.115216 kernel: ... version: 0
Jan 30 05:25:42.115224 kernel: ... bit width: 48
Jan 30 05:25:42.115231 kernel: ... generic registers: 6
Jan 30 05:25:42.115239 kernel: ... value mask: 0000ffffffffffff
Jan 30 05:25:42.115247 kernel: ... max period: 00007fffffffffff
Jan 30 05:25:42.115254 kernel: ... fixed-purpose events: 0
Jan 30 05:25:42.115264 kernel: ... event mask: 000000000000003f
Jan 30 05:25:42.115272 kernel: signal: max sigframe size: 1776
Jan 30 05:25:42.115280 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 05:25:42.115287 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 05:25:42.115295 kernel: smp: Bringing up secondary CPUs ...
Jan 30 05:25:42.115303 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 05:25:42.115310 kernel: .... node #0, CPUs: #1
Jan 30 05:25:42.115318 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 05:25:42.115335 kernel: smpboot: Max logical packages: 1
Jan 30 05:25:42.115342 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jan 30 05:25:42.115353 kernel: devtmpfs: initialized
Jan 30 05:25:42.115361 kernel: x86/mm: Memory block size: 128MB
Jan 30 05:25:42.115368 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 05:25:42.115376 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 05:25:42.115384 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 05:25:42.115392 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 05:25:42.115400 kernel: audit: initializing netlink subsys (disabled)
Jan 30 05:25:42.115408 kernel: audit: type=2000 audit(1738214740.666:1): state=initialized audit_enabled=0 res=1
Jan 30 05:25:42.115416 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 05:25:42.115427 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 05:25:42.115435 kernel: cpuidle: using governor menu
Jan 30 05:25:42.115443 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 05:25:42.115450 kernel: dca service started, version 1.12.1
Jan 30 05:25:42.115458 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 05:25:42.115466 kernel: PCI: Using configuration type 1 for base access
Jan 30 05:25:42.115474 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 05:25:42.115483 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 05:25:42.115490 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 05:25:42.115501 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 05:25:42.115509 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 05:25:42.115517 kernel: ACPI: Added _OSI(Module Device)
Jan 30 05:25:42.115525 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 05:25:42.115533 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 05:25:42.115541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 05:25:42.115549 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 05:25:42.115557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 05:25:42.115565 kernel: ACPI: Interpreter enabled
Jan 30 05:25:42.115575 kernel: ACPI: PM: (supports S0 S5)
Jan 30 05:25:42.115583 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 05:25:42.115591 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 05:25:42.115599 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 05:25:42.115607 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 05:25:42.115615 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 05:25:42.115823 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 05:25:42.116011 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 05:25:42.116148 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 05:25:42.116160 kernel: PCI host bridge to bus 0000:00
Jan 30 05:25:42.116293 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 05:25:42.116419 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 05:25:42.116531 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 05:25:42.116642 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 30 05:25:42.116758 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 05:25:42.116870 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 30 05:25:42.117019 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 05:25:42.117168 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 05:25:42.117308 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 30 05:25:42.117453 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 30 05:25:42.117576 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 30 05:25:42.117704 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 30 05:25:42.117827 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 30 05:25:42.118036 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 05:25:42.118174 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.118295 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 30 05:25:42.118442 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.118569 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 30 05:25:42.118700 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.118822 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 30 05:25:42.118977 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.119101 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 30 05:25:42.119241 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.119381 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 30 05:25:42.119517 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.119640 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 30 05:25:42.119774 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.119896 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 30 05:25:42.120068 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.120198 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 30 05:25:42.120352 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 05:25:42.120479 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 30 05:25:42.120616 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 05:25:42.120738 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 05:25:42.120872 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 05:25:42.121055 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 30 05:25:42.121179 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 30 05:25:42.121320 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 05:25:42.121452 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 05:25:42.121588 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 05:25:42.121712 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 30 05:25:42.121837 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 05:25:42.122062 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 30 05:25:42.122184 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 05:25:42.122308 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 05:25:42.122444 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 05:25:42.122581 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 05:25:42.122722 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 30 05:25:42.122857 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 05:25:42.123017 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 05:25:42.123138 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 05:25:42.123275 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 05:25:42.123414 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 30 05:25:42.123540 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 30 05:25:42.123661 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 05:25:42.123779 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 05:25:42.123902 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 05:25:42.124079 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 05:25:42.124208 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 05:25:42.124338 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 05:25:42.124459 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 05:25:42.124579 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 05:25:42.124719 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 05:25:42.124856 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 30 05:25:42.125020 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 30 05:25:42.125145 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 05:25:42.125267 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 05:25:42.125602 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 05:25:42.125742 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 05:25:42.125872 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 30 05:25:42.127463 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 30 05:25:42.127721 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 05:25:42.127911 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 05:25:42.128128 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 05:25:42.128146 kernel: acpiphp: Slot [0] registered
Jan 30 05:25:42.128399 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 05:25:42.128632 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 30 05:25:42.128846 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 30 05:25:42.129098 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 30 05:25:42.129289 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 05:25:42.129492 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 05:25:42.129678 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 05:25:42.129695 kernel: acpiphp: Slot [0-2] registered
Jan 30 05:25:42.129882 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 05:25:42.130098 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 30 05:25:42.130281 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 05:25:42.130304 kernel: acpiphp: Slot [0-3] registered
Jan 30 05:25:42.130510 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 05:25:42.130694 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 05:25:42.130876 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 05:25:42.130893 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 05:25:42.130906 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 05:25:42.130919 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 05:25:42.133632 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 05:25:42.133649 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 05:25:42.133669 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 05:25:42.133681 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 05:25:42.133694 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 05:25:42.133707 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 05:25:42.133719 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 05:25:42.133731 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 05:25:42.133744 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 05:25:42.133756 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 05:25:42.133768 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 05:25:42.133784 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 05:25:42.133796 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 05:25:42.133808 kernel: iommu: Default domain type: Translated
Jan 30 05:25:42.133821 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 05:25:42.133833 kernel: PCI: Using ACPI for IRQ routing
Jan 30 05:25:42.133845 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 05:25:42.133858 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 05:25:42.133870 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 30 05:25:42.134115 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 05:25:42.134308 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 05:25:42.134526 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 05:25:42.134546 kernel: vgaarb: loaded
Jan 30 05:25:42.134559 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 05:25:42.134572 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 05:25:42.134584 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 05:25:42.134596 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 05:25:42.134609 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 05:25:42.134627 kernel: pnp: PnP ACPI init
Jan 30 05:25:42.134825 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 05:25:42.134844 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 05:25:42.134857 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 05:25:42.134870 kernel: NET: Registered PF_INET protocol family
Jan 30 05:25:42.134882 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 05:25:42.134895 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 05:25:42.134908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 05:25:42.134920 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 05:25:42.134978 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 05:25:42.134990 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 05:25:42.135003 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:25:42.135015 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:25:42.135027 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 05:25:42.135040 kernel: NET: Registered PF_XDP protocol family
Jan 30 05:25:42.135225 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 05:25:42.135426 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 05:25:42.135614 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 05:25:42.135794 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 05:25:42.136002 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 05:25:42.136185 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 05:25:42.136394 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 05:25:42.136576 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 05:25:42.136756 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 05:25:42.137706 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 05:25:42.137904 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 05:25:42.138121 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 05:25:42.138309 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 05:25:42.138506 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 05:25:42.138703 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 05:25:42.138895 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 05:25:42.142866 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 05:25:42.143111 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 05:25:42.143308 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 05:25:42.143509 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 05:25:42.143692 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 05:25:42.143883 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 05:25:42.144154 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 05:25:42.144385 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 05:25:42.144595 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 05:25:42.144807 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 30 05:25:42.145868 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 05:25:42.146090 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 05:25:42.146272 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 05:25:42.146465 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 30 05:25:42.146644 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 30 05:25:42.146826 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 05:25:42.147034 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 05:25:42.147236 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 30 05:25:42.147518 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 05:25:42.147705 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 05:25:42.147887 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 05:25:42.152094 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 05:25:42.152359 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 05:25:42.152532 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 30 05:25:42.152696 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 05:25:42.152858 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 30 05:25:42.153092 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 05:25:42.153272 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 05:25:42.153482 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 05:25:42.153657 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 05:25:42.153848 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 05:25:42.154078 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 05:25:42.154270 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 05:25:42.154460 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 05:25:42.154657 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 05:25:42.154862 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 05:25:42.156191 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 05:25:42.156414 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 05:25:42.156615 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 30 05:25:42.156789 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 30 05:25:42.158591 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 05:25:42.158726 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 30 05:25:42.158842 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 30 05:25:42.158973 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 05:25:42.159100 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 30 05:25:42.159215 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 05:25:42.159340 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 05:25:42.159357 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 05:25:42.159367 kernel: PCI: CLS 0 bytes, default 64
Jan 30 05:25:42.159376 kernel: Initialise system trusted keyrings
Jan 30 05:25:42.159385 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 05:25:42.159393 kernel: Key type asymmetric registered
Jan 30 05:25:42.159402 kernel: Asymmetric key parser 'x509' registered
Jan 30 05:25:42.159410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 05:25:42.159419 kernel: io scheduler mq-deadline registered
Jan 30 05:25:42.159428 kernel: io scheduler kyber registered
Jan 30 05:25:42.159440 kernel: io scheduler bfq registered
Jan 30 05:25:42.159561 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 30 05:25:42.159682 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 30 05:25:42.159799 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 30 05:25:42.159982 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 30 05:25:42.160112 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 30 05:25:42.160251 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 30 05:25:42.160404 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 30 05:25:42.160524 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 30 05:25:42.160650 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 30 05:25:42.160772 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 30 05:25:42.160893 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 30 05:25:42.162261 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 30 05:25:42.162405 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 30 05:25:42.162525 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 30 05:25:42.162646 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 30 05:25:42.162764 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 30 05:25:42.162780 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 05:25:42.162898 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 30 05:25:42.163043 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 30 05:25:42.163987 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 05:25:42.163997 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 30 05:25:42.164006 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 05:25:42.164015 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 05:25:42.164023 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 05:25:42.164036 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 05:25:42.164045 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 05:25:42.164053 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 05:25:42.164192 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 05:25:42.164342 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 05:25:42.164460 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T05:25:41 UTC (1738214741)
Jan 30 05:25:42.164571 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 05:25:42.164582 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 05:25:42.164595 kernel: NET: Registered PF_INET6 protocol family
Jan 30 05:25:42.164604 kernel: Segment Routing with IPv6
Jan 30 05:25:42.164613 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 05:25:42.164621 kernel: NET: Registered PF_PACKET protocol family
Jan 30 05:25:42.164630 kernel: Key type dns_resolver registered
Jan 30 05:25:42.164638 kernel: IPI shorthand broadcast: enabled
Jan 30 05:25:42.164647 kernel: sched_clock: Marking stable (1506015673, 143537678)->(1715080445, -65527094)
Jan 30 05:25:42.164655 kernel: registered taskstats version 1
Jan 30 05:25:42.164664 kernel: Loading compiled-in X.509 certificates
Jan 30 05:25:42.164675 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 05:25:42.164683 kernel: Key type .fscrypt registered
Jan 30 05:25:42.164692 kernel: Key type fscrypt-provisioning registered
Jan 30 05:25:42.164700 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 05:25:42.164709 kernel: ima: Allocated hash algorithm: sha1
Jan 30 05:25:42.164717 kernel: ima: No architecture policies found
Jan 30 05:25:42.164726 kernel: clk: Disabling unused clocks
Jan 30 05:25:42.164734 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 05:25:42.164743 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 05:25:42.164757 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 05:25:42.164770 kernel: Run /init as init process
Jan 30 05:25:42.164781 kernel: with arguments:
Jan 30 05:25:42.164792 kernel: /init
Jan 30 05:25:42.164804 kernel: with environment:
Jan 30 05:25:42.164813 kernel: HOME=/
Jan 30 05:25:42.164821 kernel: TERM=linux
Jan 30 05:25:42.164830 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 05:25:42.164841 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:25:42.164857 systemd[1]: Detected virtualization kvm.
Jan 30 05:25:42.164866 systemd[1]: Detected architecture x86-64.
Jan 30 05:25:42.164875 systemd[1]: Running in initrd.
Jan 30 05:25:42.164884 systemd[1]: No hostname configured, using default hostname.
Jan 30 05:25:42.164893 systemd[1]: Hostname set to .
Jan 30 05:25:42.164902 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:25:42.164911 systemd[1]: Queued start job for default target initrd.target.
Jan 30 05:25:42.164922 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:25:42.165968 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:25:42.165979 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 05:25:42.165989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:25:42.165998 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 05:25:42.166008 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 05:25:42.166019 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 05:25:42.166033 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 05:25:42.166043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:25:42.166052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:25:42.166062 systemd[1]: Reached target paths.target - Path Units.
Jan 30 05:25:42.166071 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:25:42.166080 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:25:42.166090 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 05:25:42.166099 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:25:42.166112 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:25:42.166121 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 05:25:42.166132 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 05:25:42.166141 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:25:42.166151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:25:42.166161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:25:42.166170 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 05:25:42.166180 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 05:25:42.166189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:25:42.166202 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 05:25:42.166211 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 05:25:42.166220 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:25:42.166255 systemd-journald[187]: Collecting audit messages is disabled.
Jan 30 05:25:42.166283 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:25:42.166292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:25:42.166301 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 05:25:42.166311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:25:42.166320 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 05:25:42.166342 systemd-journald[187]: Journal started
Jan 30 05:25:42.166364 systemd-journald[187]: Runtime Journal (/run/log/journal/9f9bfb3f0a2b4aa99565c95068f94f86) is 4.8M, max 38.4M, 33.6M free.
Jan 30 05:25:42.153986 systemd-modules-load[189]: Inserted module 'overlay'
Jan 30 05:25:42.212576 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 05:25:42.212630 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 05:25:42.212677 kernel: Bridge firewalling registered
Jan 30 05:25:42.188792 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 30 05:25:42.215680 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:25:42.215316 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:25:42.220951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:25:42.234397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:25:42.237788 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:25:42.246161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 05:25:42.249171 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:25:42.255660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:25:42.259111 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 05:25:42.263056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:25:42.277738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:25:42.278504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:25:42.283887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 05:25:42.292352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:25:42.298297 dracut-cmdline[214]: dracut-dracut-053
Jan 30 05:25:42.303205 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:25:42.329668 systemd-resolved[222]: Positive Trust Anchors:
Jan 30 05:25:42.329687 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 05:25:42.329718 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 05:25:42.335891 systemd-resolved[222]: Defaulting to hostname 'linux'.
Jan 30 05:25:42.337146 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 05:25:42.337894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:25:42.392989 kernel: SCSI subsystem initialized
Jan 30 05:25:42.402974 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 05:25:42.421972 kernel: iscsi: registered transport (tcp)
Jan 30 05:25:42.451053 kernel: iscsi: registered transport (qla4xxx)
Jan 30 05:25:42.451169 kernel: QLogic iSCSI HBA Driver
Jan 30 05:25:42.527624 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:25:42.535153 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 05:25:42.591673 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 05:25:42.591766 kernel: device-mapper: uevent: version 1.0.3
Jan 30 05:25:42.591790 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 05:25:42.646964 kernel: raid6: avx2x4 gen() 18476 MB/s
Jan 30 05:25:42.664967 kernel: raid6: avx2x2 gen() 19220 MB/s
Jan 30 05:25:42.682098 kernel: raid6: avx2x1 gen() 22735 MB/s
Jan 30 05:25:42.682146 kernel: raid6: using algorithm avx2x1 gen() 22735 MB/s
Jan 30 05:25:42.700181 kernel: raid6: .... xor() 15730 MB/s, rmw enabled
Jan 30 05:25:42.700217 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 05:25:42.721983 kernel: xor: automatically using best checksumming function avx
Jan 30 05:25:42.870979 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 05:25:42.883662 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:25:42.889140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:25:42.903594 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 30 05:25:42.908509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:25:42.918133 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 05:25:42.932490 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 30 05:25:42.963129 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:25:42.969081 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:25:43.044702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:25:43.054086 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 05:25:43.071036 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:25:43.075120 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:25:43.076572 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:25:43.077055 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:25:43.083496 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 05:25:43.104573 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:25:43.170677 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 05:25:43.176634 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:25:43.186769 kernel: ACPI: bus type USB registered
Jan 30 05:25:43.186800 kernel: usbcore: registered new interface driver usbfs
Jan 30 05:25:43.186821 kernel: usbcore: registered new interface driver hub
Jan 30 05:25:43.186832 kernel: usbcore: registered new device driver usb
Jan 30 05:25:43.176767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:25:43.187982 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:25:43.189494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:25:43.189661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:25:43.191071 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:25:43.210100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:25:43.247950 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 05:25:43.248033 kernel: AES CTR mode by8 optimization enabled
Jan 30 05:25:43.248044 kernel: scsi host0: Virtio SCSI HBA
Jan 30 05:25:43.250939 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 30 05:25:43.252665 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 05:25:43.257304 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 30 05:25:43.258473 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 05:25:43.259203 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 05:25:43.260036 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 30 05:25:43.260821 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 05:25:43.261610 kernel: hub 1-0:1.0: USB hub found
Jan 30 05:25:43.262844 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 05:25:43.263606 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 05:25:43.265889 kernel: hub 2-0:1.0: USB hub found
Jan 30 05:25:43.266588 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 05:25:43.267847 kernel: libata version 3.00 loaded.
Jan 30 05:25:43.319352 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 05:25:43.347609 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 05:25:43.347628 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 05:25:43.347781 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 05:25:43.347916 kernel: scsi host1: ahci
Jan 30 05:25:43.348089 kernel: scsi host2: ahci
Jan 30 05:25:43.348235 kernel: scsi host3: ahci
Jan 30 05:25:43.348396 kernel: scsi host4: ahci
Jan 30 05:25:43.348544 kernel: scsi host5: ahci
Jan 30 05:25:43.348682 kernel: scsi host6: ahci
Jan 30 05:25:43.348820 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51
Jan 30 05:25:43.348831 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51
Jan 30 05:25:43.348842 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51
Jan 30 05:25:43.348853 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 30 05:25:43.363781 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51
Jan 30 05:25:43.363795 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 30 05:25:43.364042 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51
Jan 30 05:25:43.364054 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 05:25:43.364207 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51
Jan 30 05:25:43.364219 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 30 05:25:43.364377 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 05:25:43.364526 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 05:25:43.364542 kernel: GPT:17805311 != 80003071
Jan 30 05:25:43.364552 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 05:25:43.364563 kernel: GPT:17805311 != 80003071
Jan 30 05:25:43.364573 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 05:25:43.364584 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 05:25:43.364594 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 05:25:43.315954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:25:43.330169 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:25:43.355417 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:25:43.494055 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 05:25:43.642985 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 05:25:43.659567 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 05:25:43.659636 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 05:25:43.659953 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 05:25:43.665073 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 05:25:43.670114 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 05:25:43.670151 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 05:25:43.670166 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 05:25:43.674230 kernel: ata1.00: applying bridge limits
Jan 30 05:25:43.675098 kernel: ata1.00: configured for UDMA/100
Jan 30 05:25:43.680972 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 05:25:43.704666 kernel: usbcore: registered new interface driver usbhid
Jan 30 05:25:43.704754 kernel: usbhid: USB HID core driver
Jan 30 05:25:43.716074 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 30 05:25:43.720950 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 30 05:25:43.739951 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 05:25:43.748956 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 05:25:43.748973 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 30 05:25:43.763994 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 30 05:25:43.777944 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454)
Jan 30 05:25:43.783958 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (461)
Jan 30 05:25:43.791102 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 30 05:25:43.801894 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 30 05:25:43.803158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 30 05:25:43.807405 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 05:25:43.815516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 05:25:43.821978 disk-uuid[573]: Primary Header is updated.
Jan 30 05:25:43.821978 disk-uuid[573]: Secondary Entries is updated.
Jan 30 05:25:43.821978 disk-uuid[573]: Secondary Header is updated.
Jan 30 05:25:44.837184 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 05:25:44.840103 disk-uuid[575]: The operation has completed successfully.
Jan 30 05:25:44.944692 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 05:25:44.944897 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 05:25:44.958073 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 05:25:44.983055 sh[586]: Success
Jan 30 05:25:45.006240 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 05:25:45.088478 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 05:25:45.105112 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 05:25:45.109214 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 05:25:45.149585 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 05:25:45.149700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:25:45.153418 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 05:25:45.156974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 05:25:45.161544 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 05:25:45.175021 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 05:25:45.179329 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 05:25:45.182185 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 05:25:45.191442 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 05:25:45.198548 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 05:25:45.220210 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:25:45.220270 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:25:45.222399 kernel: BTRFS info (device sda6): using free space tree
Jan 30 05:25:45.234215 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 05:25:45.234301 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 05:25:45.255022 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:25:45.254579 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 05:25:45.267949 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 05:25:45.276208 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 05:25:45.387874 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:25:45.409256 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 05:25:45.449166 systemd-networkd[767]: lo: Link UP
Jan 30 05:25:45.450070 systemd-networkd[767]: lo: Gained carrier
Jan 30 05:25:45.453546 systemd-networkd[767]: Enumeration completed
Jan 30 05:25:45.454279 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 05:25:45.455759 systemd[1]: Reached target network.target - Network.
Jan 30 05:25:45.456129 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:45.456133 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:25:45.460308 systemd-networkd[767]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:45.461233 systemd-networkd[767]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:25:45.461839 systemd-networkd[767]: eth0: Link UP
Jan 30 05:25:45.461842 systemd-networkd[767]: eth0: Gained carrier
Jan 30 05:25:45.461849 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:45.469197 systemd-networkd[767]: eth1: Link UP
Jan 30 05:25:45.469203 systemd-networkd[767]: eth1: Gained carrier
Jan 30 05:25:45.469210 systemd-networkd[767]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:45.486095 ignition[680]: Ignition 2.19.0
Jan 30 05:25:45.486109 ignition[680]: Stage: fetch-offline
Jan 30 05:25:45.486164 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:25:45.486174 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:25:45.486281 ignition[680]: parsed url from cmdline: ""
Jan 30 05:25:45.488672 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:25:45.486285 ignition[680]: no config URL provided
Jan 30 05:25:45.486291 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:25:45.486302 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:25:45.486308 ignition[680]: failed to fetch config: resource requires networking
Jan 30 05:25:45.486530 ignition[680]: Ignition finished successfully
Jan 30 05:25:45.497241 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 05:25:45.518895 ignition[775]: Ignition 2.19.0
Jan 30 05:25:45.518915 ignition[775]: Stage: fetch
Jan 30 05:25:45.520087 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:25:45.520103 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:25:45.520228 ignition[775]: parsed url from cmdline: ""
Jan 30 05:25:45.520235 ignition[775]: no config URL provided
Jan 30 05:25:45.520244 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:25:45.520257 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:25:45.520285 ignition[775]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 30 05:25:45.520515 ignition[775]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 30 05:25:45.537020 systemd-networkd[767]: eth0: DHCPv4 address 49.13.81.87/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 05:25:45.561004 systemd-networkd[767]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 05:25:45.720908 ignition[775]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 30 05:25:45.729987 ignition[775]: GET result: OK
Jan 30 05:25:45.730232 ignition[775]: parsing config with SHA512: 28ff1cd8d23f6d6cfc05d4c5f1ad3ec103cd56f9ecc93a4caf9124e205a9d0f587cc1b1787548da91722620cc192af2440adc8701d86751cb1f8fa733a9ec89e
Jan 30 05:25:45.752537 unknown[775]: fetched base config from "system"
Jan 30 05:25:45.752568 unknown[775]: fetched base config from "system"
Jan 30 05:25:45.752583 unknown[775]: fetched user config from "hetzner"
Jan 30 05:25:45.754320 ignition[775]: fetch: fetch complete
Jan 30 05:25:45.754333 ignition[775]: fetch: fetch passed
Jan 30 05:25:45.754441 ignition[775]: Ignition finished successfully
Jan 30 05:25:45.764510 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 05:25:45.776209 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 05:25:45.834243 ignition[783]: Ignition 2.19.0
Jan 30 05:25:45.834269 ignition[783]: Stage: kargs
Jan 30 05:25:45.834670 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:25:45.834695 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:25:45.837051 ignition[783]: kargs: kargs passed
Jan 30 05:25:45.837187 ignition[783]: Ignition finished successfully
Jan 30 05:25:45.841088 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 05:25:45.858341 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 05:25:45.904962 ignition[789]: Ignition 2.19.0
Jan 30 05:25:45.906117 ignition[789]: Stage: disks
Jan 30 05:25:45.906417 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:25:45.906435 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:25:45.909450 ignition[789]: disks: disks passed
Jan 30 05:25:45.909568 ignition[789]: Ignition finished successfully
Jan 30 05:25:45.912386 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 05:25:45.914725 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 05:25:45.915335 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 05:25:45.916828 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:25:45.918865 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 05:25:45.920398 systemd[1]: Reached target basic.target - Basic System.
Jan 30 05:25:45.931238 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 05:25:45.970184 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 05:25:45.977252 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 05:25:45.986085 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 05:25:46.115189 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 05:25:46.115867 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 05:25:46.117173 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:25:46.130126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:25:46.133493 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 05:25:46.137147 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 05:25:46.153066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 05:25:46.153113 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:25:46.162161 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (806)
Jan 30 05:25:46.163218 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 05:25:46.176160 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:25:46.176197 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:25:46.176211 kernel: BTRFS info (device sda6): using free space tree
Jan 30 05:25:46.176225 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 05:25:46.176239 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 05:25:46.179128 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 05:25:46.184147 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:25:46.268343 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 05:25:46.269536 coreos-metadata[808]: Jan 30 05:25:46.268 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 30 05:25:46.271949 coreos-metadata[808]: Jan 30 05:25:46.270 INFO Fetch successful
Jan 30 05:25:46.271949 coreos-metadata[808]: Jan 30 05:25:46.270 INFO wrote hostname ci-4081-3-0-c-240f39d8fc to /sysroot/etc/hostname
Jan 30 05:25:46.274675 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 05:25:46.277779 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 30 05:25:46.283211 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 05:25:46.288019 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 05:25:46.442440 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 05:25:46.451164 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 05:25:46.454348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 05:25:46.472376 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 05:25:46.478515 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:25:46.517513 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 05:25:46.529174 systemd-networkd[767]: eth0: Gained IPv6LL
Jan 30 05:25:46.534096 ignition[923]: INFO : Ignition 2.19.0
Jan 30 05:25:46.535044 ignition[923]: INFO : Stage: mount
Jan 30 05:25:46.535044 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:25:46.535044 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:25:46.537488 ignition[923]: INFO : mount: mount passed
Jan 30 05:25:46.537488 ignition[923]: INFO : Ignition finished successfully
Jan 30 05:25:46.539938 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 05:25:46.547018 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 05:25:46.578161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:25:46.596005 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (934)
Jan 30 05:25:46.601265 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:25:46.601389 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:25:46.601416 kernel: BTRFS info (device sda6): using free space tree
Jan 30 05:25:46.617899 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 05:25:46.618017 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 05:25:46.622430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:25:46.726667 ignition[950]: INFO : Ignition 2.19.0
Jan 30 05:25:46.726667 ignition[950]: INFO : Stage: files
Jan 30 05:25:46.729758 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:25:46.729758 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:25:46.729758 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 05:25:46.734688 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 05:25:46.734688 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 05:25:46.737883 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 05:25:46.737883 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 05:25:46.737883 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 05:25:46.737883 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 05:25:46.735972 unknown[950]: wrote ssh authorized keys file for user: core
Jan 30 05:25:46.746261 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 05:25:46.746261 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 05:25:46.746261 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 05:25:46.851609 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 05:25:47.071009 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 05:25:47.073030 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 05:25:47.073030 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 30 05:25:47.489341 systemd-networkd[767]: eth1: Gained IPv6LL
Jan 30 05:25:47.633898 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 30 05:25:47.864086 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 05:25:47.865500 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 05:25:47.875480 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 05:25:47.875480 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 05:25:47.875480 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:25:47.875480 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:25:47.875480 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:25:47.875480 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 05:25:48.401406 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 30 05:25:48.832367 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:25:48.832367 ignition[950]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 05:25:48.838979 ignition[950]: INFO : files: files passed
Jan 30 05:25:48.838979 ignition[950]: INFO : Ignition finished successfully
Jan 30 05:25:48.840676 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 05:25:48.850234 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 05:25:48.856120 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 05:25:48.862518 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 05:25:48.862645 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 05:25:48.891574 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:25:48.891574 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:25:48.894442 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:25:48.897432 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 05:25:48.898207 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 05:25:48.905206 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 05:25:48.931110 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 05:25:48.931247 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 05:25:48.932774 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 05:25:48.933587 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 05:25:48.934775 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 05:25:48.940124 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 05:25:48.983744 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 05:25:48.990119 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 05:25:49.031844 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:25:49.035114 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:25:49.036684 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 05:25:49.039223 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 05:25:49.039567 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 05:25:49.042347 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 05:25:49.044173 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 05:25:49.047087 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 05:25:49.049650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:25:49.051918 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 05:25:49.054694 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 05:25:49.057315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:25:49.060124 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 05:25:49.062963 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 05:25:49.065715 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 05:25:49.068335 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 05:25:49.068700 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:25:49.071509 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:25:49.073460 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:25:49.075679 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 05:25:49.077039 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:25:49.079861 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 05:25:49.080257 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:25:49.083491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 05:25:49.083827 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 05:25:49.086970 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 05:25:49.087248 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 05:25:49.089867 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 05:25:49.090349 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 05:25:49.100463 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 05:25:49.113742 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 05:25:49.114226 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:25:49.120303 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 05:25:49.122210 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 05:25:49.123138 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:25:49.125794 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 05:25:49.127176 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:25:49.144797 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 05:25:49.145011 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 05:25:49.156524 ignition[1004]: INFO : Ignition 2.19.0
Jan 30 05:25:49.158414 ignition[1004]: INFO : Stage: umount
Jan 30 05:25:49.158414 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:25:49.158414 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:25:49.163184 ignition[1004]: INFO : umount: umount passed
Jan 30 05:25:49.163184 ignition[1004]: INFO : Ignition finished successfully
Jan 30 05:25:49.167613 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 05:25:49.168905 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 05:25:49.172262 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 05:25:49.173299 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 05:25:49.175790 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 05:25:49.175883 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 05:25:49.176788 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 05:25:49.176879 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 05:25:49.178853 systemd[1]: Stopped target network.target - Network.
Jan 30 05:25:49.180491 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 05:25:49.180581 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:25:49.183859 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 05:25:49.185251 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 05:25:49.186976 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:25:49.187834 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 05:25:49.189628 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 05:25:49.190992 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 05:25:49.191048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:25:49.192594 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 05:25:49.192637 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:25:49.194324 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 05:25:49.194393 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 05:25:49.195717 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 05:25:49.195766 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 05:25:49.197319 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 05:25:49.198779 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 05:25:49.200124 systemd-networkd[767]: eth0: DHCPv6 lease lost
Jan 30 05:25:49.201257 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 05:25:49.201809 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 05:25:49.201941 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 05:25:49.203834 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 05:25:49.203922 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 05:25:49.204070 systemd-networkd[767]: eth1: DHCPv6 lease lost
Jan 30 05:25:49.208102 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 05:25:49.208491 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 05:25:49.209682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 05:25:49.209810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 05:25:49.215141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 05:25:49.215247 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:25:49.224109 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 05:25:49.227106 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 05:25:49.227182 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:25:49.228600 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 05:25:49.228649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:25:49.230214 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 05:25:49.230265 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:25:49.231645 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 05:25:49.231703 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:25:49.233477 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:25:49.247806 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 05:25:49.248560 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:25:49.252130 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 05:25:49.252182 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:25:49.253430 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 05:25:49.253474 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:25:49.254076 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 05:25:49.254127 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:25:49.255291 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 05:25:49.255352 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:25:49.256600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:25:49.256662 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:25:49.264192 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 05:25:49.264847 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 05:25:49.264922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:25:49.268346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:25:49.268420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:25:49.272759 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 05:25:49.272898 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 05:25:49.280868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 05:25:49.281069 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 05:25:49.283178 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 05:25:49.295129 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 05:25:49.305234 systemd[1]: Switching root.
Jan 30 05:25:49.389688 systemd-journald[187]: Journal stopped
Jan 30 05:25:50.802828 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Jan 30 05:25:50.802915 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 05:25:50.803012 kernel: SELinux: policy capability open_perms=1
Jan 30 05:25:50.803024 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 05:25:50.803047 kernel: SELinux: policy capability always_check_network=0
Jan 30 05:25:50.803058 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 05:25:50.803069 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 05:25:50.803092 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 05:25:50.803106 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 05:25:50.803117 kernel: audit: type=1403 audit(1738214749.635:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 05:25:50.803129 systemd[1]: Successfully loaded SELinux policy in 54.601ms.
Jan 30 05:25:50.803157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.860ms.
Jan 30 05:25:50.803171 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:25:50.803186 systemd[1]: Detected virtualization kvm.
Jan 30 05:25:50.803199 systemd[1]: Detected architecture x86-64.
Jan 30 05:25:50.803211 systemd[1]: Detected first boot.
Jan 30 05:25:50.803223 systemd[1]: Hostname set to <ci-4081-3-0-c-240f39d8fc>.
Jan 30 05:25:50.803237 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:25:50.803257 zram_generator::config[1065]: No configuration found.
Jan 30 05:25:50.803270 systemd[1]: Populated /etc with preset unit settings.
Jan 30 05:25:50.803282 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 05:25:50.803293 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 05:25:50.803306 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 05:25:50.803319 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 05:25:50.803330 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 05:25:50.803344 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 05:25:50.803356 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 05:25:50.803368 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 05:25:50.803380 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 05:25:50.803403 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 05:25:50.803415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:25:50.803427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:25:50.803439 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 05:25:50.803451 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 05:25:50.803465 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 05:25:50.803477 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:25:50.803489 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 05:25:50.803500 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:25:50.803512 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 05:25:50.803526 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:25:50.803538 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:25:50.803552 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:25:50.803564 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:25:50.803576 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 05:25:50.803588 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 05:25:50.803600 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 05:25:50.803612 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 05:25:50.803627 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:25:50.803651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:25:50.803667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:25:50.803681 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 05:25:50.803695 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 05:25:50.803708 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 05:25:50.803723 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 05:25:50.803736 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:25:50.803752 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 05:25:50.803765 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 05:25:50.803776 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 05:25:50.803788 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 05:25:50.803801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:25:50.803814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:25:50.803826 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 05:25:50.803838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 05:25:50.803850 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 05:25:50.803864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 05:25:50.803877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 05:25:50.803889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 05:25:50.803901 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 05:25:50.803914 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 05:25:50.803940 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 05:25:50.803952 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:25:50.803964 kernel: fuse: init (API version 7.39)
Jan 30 05:25:50.803978 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:25:50.803990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 05:25:50.804002 kernel: loop: module loaded
Jan 30 05:25:50.804014 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 05:25:50.804025 kernel: ACPI: bus type drm_connector registered
Jan 30 05:25:50.804036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:25:50.804049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:25:50.804061 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 05:25:50.804091 systemd-journald[1159]: Collecting audit messages is disabled.
Jan 30 05:25:50.804123 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 05:25:50.804136 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 05:25:50.804148 systemd-journald[1159]: Journal started
Jan 30 05:25:50.804170 systemd-journald[1159]: Runtime Journal (/run/log/journal/9f9bfb3f0a2b4aa99565c95068f94f86) is 4.8M, max 38.4M, 33.6M free.
Jan 30 05:25:50.807895 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:25:50.813174 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 05:25:50.814783 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 05:25:50.816227 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 05:25:50.818791 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:25:50.819784 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 05:25:50.820080 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 05:25:50.821384 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 05:25:50.821632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 05:25:50.823291 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 05:25:50.824498 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 05:25:50.824708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 05:25:50.825804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 05:25:50.826245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 05:25:50.827149 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 05:25:50.827480 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 05:25:50.828962 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 05:25:50.829307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 05:25:50.830694 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:25:50.831892 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 05:25:50.833082 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 05:25:50.854162 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 05:25:50.861050 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 05:25:50.869050 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 05:25:50.870187 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 05:25:50.881181 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 05:25:50.891082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 05:25:50.891719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 05:25:50.897133 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 05:25:50.898477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 05:25:50.907871 systemd-journald[1159]: Time spent on flushing to /var/log/journal/9f9bfb3f0a2b4aa99565c95068f94f86 is 63.521ms for 1121 entries.
Jan 30 05:25:50.907871 systemd-journald[1159]: System Journal (/var/log/journal/9f9bfb3f0a2b4aa99565c95068f94f86) is 8.0M, max 584.8M, 576.8M free.
Jan 30 05:25:51.034108 systemd-journald[1159]: Received client request to flush runtime journal.
Jan 30 05:25:50.910736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:25:50.940282 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 05:25:50.955341 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 05:25:50.956065 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 05:25:50.987563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 05:25:50.988313 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 05:25:51.017559 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:25:51.027227 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 05:25:51.037461 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 05:25:51.046820 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 05:25:51.059713 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 30 05:25:51.059731 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 30 05:25:51.060632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:25:51.069629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:25:51.079270 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 05:25:51.115476 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 05:25:51.127190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:25:51.150282 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jan 30 05:25:51.150307 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jan 30 05:25:51.160537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:25:51.755208 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 05:25:51.764454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:25:51.793757 systemd-udevd[1235]: Using default interface naming scheme 'v255'.
Jan 30 05:25:51.836228 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:25:51.855102 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 05:25:51.886216 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 05:25:51.914260 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 30 05:25:51.953513 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 05:25:52.042859 systemd-networkd[1240]: lo: Link UP
Jan 30 05:25:52.052291 systemd-networkd[1240]: lo: Gained carrier
Jan 30 05:25:52.055366 systemd-networkd[1240]: Enumeration completed
Jan 30 05:25:52.055829 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 05:25:52.057153 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:52.057160 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:25:52.058149 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:52.058155 systemd-networkd[1240]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:25:52.058743 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:52.058776 systemd-networkd[1240]: eth0: Link UP
Jan 30 05:25:52.058780 systemd-networkd[1240]: eth0: Gained carrier
Jan 30 05:25:52.058789 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:52.063194 systemd-networkd[1240]: eth1: Link UP
Jan 30 05:25:52.063267 systemd-networkd[1240]: eth1: Gained carrier
Jan 30 05:25:52.063317 systemd-networkd[1240]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:52.064471 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 05:25:52.083624 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:25:52.098042 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1247)
Jan 30 05:25:52.110964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 05:25:52.120014 systemd-networkd[1240]: eth0: DHCPv4 address 49.13.81.87/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 05:25:52.140960 kernel: ACPI: button: Power Button [PWRF]
Jan 30 05:25:52.143990 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 05:25:52.156033 systemd-networkd[1240]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 05:25:52.177005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:25:52.177303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:25:52.184289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 05:25:52.186582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 05:25:52.207421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 05:25:52.214017 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 05:25:52.214066 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 05:25:52.214118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:25:52.225132 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 30 05:25:52.224139 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 05:25:52.224414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 05:25:52.232440 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 30 05:25:52.234171 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 05:25:52.234370 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 05:25:52.237446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 05:25:52.237711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 05:25:52.239002 kernel: EDAC MC: Ver: 3.0.0
Jan 30 05:25:52.240921 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 05:25:52.250201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 05:25:52.250609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 05:25:52.251581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 05:25:52.287479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 05:25:52.314430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:25:52.315070 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jan 30 05:25:52.315116 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jan 30 05:25:52.320093 kernel: Console: switching to colour dummy device 80x25
Jan 30 05:25:52.322206 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 05:25:52.322248 kernel: [drm] features: -context_init
Jan 30 05:25:52.322268 kernel: [drm] number of scanouts: 1
Jan 30 05:25:52.322303 kernel: [drm] number of cap sets: 0
Jan 30 05:25:52.334838 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 30 05:25:52.342877 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:25:52.343319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:25:52.349117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:25:52.353005 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 05:25:52.357328 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 05:25:52.365747 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 05:25:52.369380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:25:52.369799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:25:52.377082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:25:52.477234 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 05:25:52.482368 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 05:25:52.484555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:25:52.518534 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 05:25:52.566308 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 05:25:52.566830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:25:52.575479 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 05:25:52.592190 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 05:25:52.638978 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 05:25:52.641507 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 05:25:52.643746 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 05:25:52.643844 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:25:52.644747 systemd[1]: Reached target machines.target - Containers.
Jan 30 05:25:52.647879 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 05:25:52.659208 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 05:25:52.663792 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 05:25:52.666250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 05:25:52.679219 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 05:25:52.685293 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 05:25:52.698193 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 05:25:52.704883 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 05:25:52.723656 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 05:25:52.743056 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 05:25:52.764677 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 05:25:52.765764 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 05:25:52.790156 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 05:25:52.835003 kernel: loop1: detected capacity change from 0 to 8 Jan 30 05:25:52.865346 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 05:25:52.925522 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 05:25:52.990057 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 05:25:53.028111 kernel: loop5: detected capacity change from 0 to 8 Jan 30 05:25:53.049610 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 05:25:53.078031 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 05:25:53.111138 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 05:25:53.112606 (sd-merge)[1328]: Merged extensions into '/usr'. Jan 30 05:25:53.122624 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 05:25:53.122655 systemd[1]: Reloading... Jan 30 05:25:53.279959 zram_generator::config[1359]: No configuration found. Jan 30 05:25:53.404033 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 05:25:53.442130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:25:53.513244 systemd[1]: Reloading finished in 389 ms. Jan 30 05:25:53.533454 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 05:25:53.538017 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 05:25:53.552226 systemd[1]: Starting ensure-sysext.service... Jan 30 05:25:53.570048 systemd-networkd[1240]: eth0: Gained IPv6LL Jan 30 05:25:53.573291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 05:25:53.586253 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 05:25:53.591731 systemd[1]: Reloading requested from client PID 1406 ('systemctl') (unit ensure-sysext.service)... Jan 30 05:25:53.592684 systemd[1]: Reloading... Jan 30 05:25:53.625890 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 05:25:53.626530 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 05:25:53.628285 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 05:25:53.628819 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. 
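The (sd-merge) lines record systemd-sysext overlaying four extension images onto /usr, which is also where the loop0 through loop7 devices above come from; the repeated capacity figures suggest the same four images being attached in two passes. The merge can be inspected or redone by hand (standard systemd-sysext verbs; the directory paths are the default search locations, not confirmed from this host):

  systemd-sysext status                    # which hierarchies are merged, from which images
  ls /etc/extensions /var/lib/extensions   # where *.raw sysext images are picked up
  systemd-sysext refresh                   # unmerge, rescan and re-merge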
Jan 30 05:25:53.629002 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Jan 30 05:25:53.635381 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:25:53.635438 systemd-tmpfiles[1407]: Skipping /boot Jan 30 05:25:53.661553 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:25:53.661578 systemd-tmpfiles[1407]: Skipping /boot Jan 30 05:25:53.731042 zram_generator::config[1438]: No configuration found. Jan 30 05:25:53.863785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:25:53.936290 systemd[1]: Reloading finished in 343 ms. Jan 30 05:25:53.957139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:25:53.988693 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:25:54.005232 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 05:25:54.015430 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 05:25:54.031652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:25:54.044838 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 05:25:54.055779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:54.056625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:54.065206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:25:54.081265 systemd-networkd[1240]: eth1: Gained IPv6LL Jan 30 05:25:54.082180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:25:54.101943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:25:54.113026 augenrules[1510]: No rules Jan 30 05:25:54.108522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:25:54.111542 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:54.120673 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:25:54.124738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:25:54.125201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:25:54.129850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:25:54.130549 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:25:54.133574 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:25:54.133803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:25:54.148910 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 05:25:54.166460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
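systemd-tmpfiles keeps the first line it sees for a given path, so the duplicate entries above are warnings rather than errors, and the autofs-managed /boot is deliberately skipped. For reference, a generic line in tmpfiles.d(5) syntax plus the command that applies the merged configuration (illustrative, not one of this host's fragments):

  # Type  Path              Mode  User  Group            Age  Argument
  d       /var/log/journal  2755  root  systemd-journal  -    -

  systemd-tmpfiles --create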
Jan 30 05:25:54.167163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:54.175302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:25:54.191196 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:25:54.203549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:25:54.205597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:25:54.211392 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 05:25:54.214393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:54.216394 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 05:25:54.221852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:25:54.222198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:25:54.227267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:25:54.227607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:25:54.234406 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:25:54.234714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:25:54.260993 systemd-resolved[1500]: Positive Trust Anchors: Jan 30 05:25:54.261285 systemd-resolved[1500]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:25:54.261318 systemd-resolved[1500]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:25:54.262809 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 05:25:54.272501 systemd-resolved[1500]: Using system hostname 'ci-4081-3-0-c-240f39d8fc'. Jan 30 05:25:54.278368 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 05:25:54.286558 systemd[1]: Finished ensure-sysext.service. Jan 30 05:25:54.292375 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 05:25:54.301815 systemd[1]: Reached target network.target - Network. Jan 30 05:25:54.304960 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 05:25:54.305592 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:25:54.306213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:54.306496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:54.321133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
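The positive trust anchor logged above is the standard IANA root zone key (the KSK-2017 DS record), and the negative anchors are the usual private and reserved zones that systemd-resolved exempts from DNSSEC validation. Once the unit is up, the resolver state can be inspected with the standard resolvectl verbs:

  resolvectl status              # per-link DNS servers, DNSSEC mode, search domains
  resolvectl query example.com   # resolve through the local stub resolver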
Jan 30 05:25:54.350234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:25:54.355099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:25:54.367099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:25:54.368903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:25:54.377154 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 05:25:54.385704 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:25:54.385966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:54.386788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:25:54.387161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:25:54.389234 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:25:54.389473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:25:54.390275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:25:54.390496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:25:54.391262 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:25:54.391532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:25:54.403384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:25:54.403488 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:25:54.467135 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 05:25:54.470141 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 05:25:54.471877 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 05:25:54.472484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 05:25:54.473225 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 05:25:54.473985 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 05:25:54.474099 systemd[1]: Reached target paths.target - Path Units. Jan 30 05:25:54.474719 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 05:25:54.475711 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 05:25:54.476549 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 05:25:54.477155 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:25:54.478964 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 05:25:54.482710 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 05:25:54.486138 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
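systemd-timesyncd, started here, is a lightweight SNTP client whose server list comes from timesyncd.conf or per-link DHCP; the next entries show it reaching 0.flatcar.pool.ntp.org. A sketch of the relevant configuration and a status check (treat the file contents as illustrative):

  # /etc/systemd/timesyncd.conf
  [Time]
  NTP=0.flatcar.pool.ntp.org

  timedatectl timesync-status   # server, stratum, offset, poll interval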
Jan 30 05:25:54.491952 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 05:25:54.492615 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:25:54.493360 systemd[1]: Reached target basic.target - Basic System. Jan 30 05:25:54.494182 systemd[1]: System is tainted: cgroupsv1 Jan 30 05:25:54.494302 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:25:54.494380 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:25:54.497047 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 05:25:54.509232 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 05:25:54.521332 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 05:25:54.534875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 05:25:54.548304 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 05:25:54.549278 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 05:25:54.559482 coreos-metadata[1565]: Jan 30 05:25:54.559 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 05:25:54.561252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:25:54.569289 coreos-metadata[1565]: Jan 30 05:25:54.568 INFO Fetch successful Jan 30 05:25:54.571146 coreos-metadata[1565]: Jan 30 05:25:54.570 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 05:25:54.572953 jq[1570]: false Jan 30 05:25:54.573234 coreos-metadata[1565]: Jan 30 05:25:54.571 INFO Fetch successful Jan 30 05:25:54.579499 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 05:25:54.605645 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 05:25:55.188809 extend-filesystems[1571]: Found loop4 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found loop5 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found loop6 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found loop7 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda1 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda2 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda3 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found usr Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda4 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda6 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda7 Jan 30 05:25:55.188809 extend-filesystems[1571]: Found sda9 Jan 30 05:25:55.188809 extend-filesystems[1571]: Checking size of /dev/sda9 Jan 30 05:25:55.325737 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 05:25:55.325793 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1245) Jan 30 05:25:55.188647 systemd-timesyncd[1554]: Contacted time server 85.215.189.120:123 (0.flatcar.pool.ntp.org). Jan 30 05:25:55.327058 extend-filesystems[1571]: Resized partition /dev/sda9 Jan 30 05:25:55.205549 dbus-daemon[1566]: [system] SELinux support is enabled Jan 30 05:25:55.188734 systemd-timesyncd[1554]: Initial clock synchronization to Thu 2025-01-30 05:25:55.188478 UTC. 
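Two things happen in the entries above: the Flatcar metadata agent fetches instance data from Hetzner's link-local endpoint, and the first NTP synchronization steps the clock forward by roughly half a second, which is why the timestamps jump from 05:25:54 to 05:25:55 and why systemd-resolved flushes its caches in the next entry. The metadata endpoints can be queried by hand exactly as logged:

  curl -s http://169.254.169.254/hetzner/v1/metadata
  curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks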
Jan 30 05:25:55.337404 extend-filesystems[1591]: resize2fs 1.47.1 (20-May-2024) Jan 30 05:25:55.189196 systemd-resolved[1500]: Clock change detected. Flushing caches. Jan 30 05:25:55.196798 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 05:25:55.224971 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 05:25:55.247210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 05:25:55.264915 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 05:25:55.275638 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 05:25:55.312961 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 05:25:55.328302 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 05:25:55.347271 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 05:25:55.360072 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 05:25:55.383819 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 05:25:55.384259 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 05:25:55.391899 jq[1610]: true Jan 30 05:25:55.396994 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 05:25:55.397354 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 05:25:55.404618 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 05:25:55.422075 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 05:25:55.422487 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 05:25:55.424061 update_engine[1608]: I20250130 05:25:55.423921 1608 main.cc:92] Flatcar Update Engine starting Jan 30 05:25:55.436708 update_engine[1608]: I20250130 05:25:55.435010 1608 update_check_scheduler.cc:74] Next update check in 4m30s Jan 30 05:25:55.461969 (ntainerd)[1620]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 05:25:55.470737 jq[1619]: true Jan 30 05:25:55.545399 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 05:25:55.594207 tar[1617]: linux-amd64/helm Jan 30 05:25:55.597212 systemd[1]: Started update-engine.service - Update Engine. Jan 30 05:25:55.605546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 05:25:55.605591 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 05:25:55.606190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 05:25:55.606217 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 05:25:55.609404 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 05:25:55.622347 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
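update-engine and locksmithd are now up, and extend-filesystems has started an online grow of the root filesystem; the completion report follows. The manual equivalent, as a sketch: resize2fs can enlarge a mounted ext4 filesystem in place, and growpart from cloud-utils is the usual helper if the partition itself must be widened first (its presence on this image is an assumption):

  growpart /dev/sda 9   # enlarge partition 9 to fill the disk, if not already done
  resize2fs /dev/sda9   # grow the ext4 filesystem to the partition size, online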
Jan 30 05:25:55.624708 extend-filesystems[1591]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 05:25:55.624708 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 05:25:55.624708 extend-filesystems[1591]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 05:25:55.684105 extend-filesystems[1571]: Resized filesystem in /dev/sda9 Jan 30 05:25:55.684105 extend-filesystems[1571]: Found sr0 Jan 30 05:25:55.631259 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 05:25:55.631630 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 05:25:55.633980 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 05:25:55.661094 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 05:25:55.671046 systemd-logind[1601]: New seat seat0. Jan 30 05:25:55.684796 systemd-logind[1601]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 05:25:55.684817 systemd-logind[1601]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 05:25:55.688626 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 05:25:55.758269 bash[1664]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:25:55.760109 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 05:25:55.790738 systemd[1]: Starting sshkeys.service... Jan 30 05:25:55.840517 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 05:25:55.853397 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 05:25:55.920804 coreos-metadata[1679]: Jan 30 05:25:55.920 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 05:25:55.929618 coreos-metadata[1679]: Jan 30 05:25:55.929 INFO Fetch successful Jan 30 05:25:55.936793 unknown[1679]: wrote ssh authorized keys file for user: core Jan 30 05:25:55.984749 update-ssh-keys[1686]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:25:55.990993 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 05:25:56.005472 systemd[1]: Finished sshkeys.service. Jan 30 05:25:56.045230 sshd_keygen[1611]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 05:25:56.073720 containerd[1620]: time="2025-01-30T05:25:56.072721598Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 05:25:56.084609 locksmithd[1649]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 05:25:56.128502 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 05:25:56.158443 containerd[1620]: time="2025-01-30T05:25:56.151803584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:56.159232 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 05:25:56.171304 containerd[1620]: time="2025-01-30T05:25:56.171133729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:56.171304 containerd[1620]: time="2025-01-30T05:25:56.171229379Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 05:25:56.171304 containerd[1620]: time="2025-01-30T05:25:56.171252202Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 05:25:56.171502 containerd[1620]: time="2025-01-30T05:25:56.171475982Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 05:25:56.171566 containerd[1620]: time="2025-01-30T05:25:56.171501840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:56.171610 containerd[1620]: time="2025-01-30T05:25:56.171583974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:56.171610 containerd[1620]: time="2025-01-30T05:25:56.171605214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:56.174202 containerd[1620]: time="2025-01-30T05:25:56.173913253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:56.174370 containerd[1620]: time="2025-01-30T05:25:56.174356304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:56.174447 containerd[1620]: time="2025-01-30T05:25:56.174432367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:56.176472 containerd[1620]: time="2025-01-30T05:25:56.174809875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:56.176472 containerd[1620]: time="2025-01-30T05:25:56.174919630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:56.176472 containerd[1620]: time="2025-01-30T05:25:56.175202611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:56.176472 containerd[1620]: time="2025-01-30T05:25:56.175378070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:56.176472 containerd[1620]: time="2025-01-30T05:25:56.175392107Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 05:25:56.176472 containerd[1620]: time="2025-01-30T05:25:56.175488948Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 05:25:56.176472 containerd[1620]: time="2025-01-30T05:25:56.175545945Z" level=info msg="metadata content store policy set" policy=shared Jan 30 05:25:56.185442 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 05:25:56.185906 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 05:25:56.193608 containerd[1620]: time="2025-01-30T05:25:56.193576303Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 05:25:56.194016 containerd[1620]: time="2025-01-30T05:25:56.193905620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 05:25:56.194016 containerd[1620]: time="2025-01-30T05:25:56.193949122Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 05:25:56.194016 containerd[1620]: time="2025-01-30T05:25:56.193968137Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 05:25:56.194016 containerd[1620]: time="2025-01-30T05:25:56.193986853Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 05:25:56.196216 containerd[1620]: time="2025-01-30T05:25:56.194306903Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 05:25:56.196216 containerd[1620]: time="2025-01-30T05:25:56.194676136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 05:25:56.196848 containerd[1620]: time="2025-01-30T05:25:56.196777006Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 05:25:56.196848 containerd[1620]: time="2025-01-30T05:25:56.196796082Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 05:25:56.196848 containerd[1620]: time="2025-01-30T05:25:56.196809297Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 05:25:56.196848 containerd[1620]: time="2025-01-30T05:25:56.196825708Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 05:25:56.198023 containerd[1620]: time="2025-01-30T05:25:56.198006893Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 05:25:56.198098 containerd[1620]: time="2025-01-30T05:25:56.198086212Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 05:25:56.198160 containerd[1620]: time="2025-01-30T05:25:56.198139000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 05:25:56.198441 containerd[1620]: time="2025-01-30T05:25:56.198427392Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 05:25:56.198523 containerd[1620]: time="2025-01-30T05:25:56.198510697Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 05:25:56.198704 containerd[1620]: time="2025-01-30T05:25:56.198596268Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 05:25:56.198704 containerd[1620]: time="2025-01-30T05:25:56.198614683Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 05:25:56.198704 containerd[1620]: time="2025-01-30T05:25:56.198640652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199758889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199779958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199794656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199809754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199840191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199856502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199876339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199889674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199921724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199940490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199953314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199968351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.199999320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.200023445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200130 containerd[1620]: time="2025-01-30T05:25:56.200037051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.200437 containerd[1620]: time="2025-01-30T05:25:56.200052820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200482175Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200507443Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200518694Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200766849Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200797527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200810662Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200821071Z" level=info msg="NRI interface is disabled by configuration." Jan 30 05:25:56.202122 containerd[1620]: time="2025-01-30T05:25:56.200831450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 05:25:56.202393 containerd[1620]: time="2025-01-30T05:25:56.202090582Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 05:25:56.202393 containerd[1620]: time="2025-01-30T05:25:56.202362051Z" level=info msg="Connect containerd service" Jan 30 05:25:56.202631 containerd[1620]: time="2025-01-30T05:25:56.202612952Z" level=info msg="using legacy CRI server" Jan 30 05:25:56.202718 containerd[1620]: time="2025-01-30T05:25:56.202705465Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 05:25:56.202902 containerd[1620]: time="2025-01-30T05:25:56.202889541Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 05:25:56.203068 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 05:25:56.210441 containerd[1620]: time="2025-01-30T05:25:56.210421257Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 05:25:56.211200 containerd[1620]: time="2025-01-30T05:25:56.210594793Z" level=info msg="Start subscribing containerd event" Jan 30 05:25:56.211200 containerd[1620]: time="2025-01-30T05:25:56.210662981Z" level=info msg="Start recovering state" Jan 30 05:25:56.211200 containerd[1620]: time="2025-01-30T05:25:56.210739694Z" level=info msg="Start event monitor" Jan 30 05:25:56.211200 containerd[1620]: time="2025-01-30T05:25:56.210762547Z" level=info msg="Start snapshots syncer" Jan 30 05:25:56.211200 containerd[1620]: time="2025-01-30T05:25:56.210771574Z" level=info msg="Start cni network conf syncer for default" Jan 30 05:25:56.211200 containerd[1620]: time="2025-01-30T05:25:56.210779259Z" level=info msg="Start streaming server" Jan 30 05:25:56.212202 containerd[1620]: time="2025-01-30T05:25:56.212177190Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 05:25:56.212310 containerd[1620]: time="2025-01-30T05:25:56.212297366Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 05:25:56.212576 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 05:25:56.217775 containerd[1620]: time="2025-01-30T05:25:56.217581617Z" level=info msg="containerd successfully booted in 0.146314s" Jan 30 05:25:56.275968 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 05:25:56.293248 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 05:25:56.311990 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 05:25:56.314382 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 05:25:56.666157 tar[1617]: linux-amd64/LICENSE Jan 30 05:25:56.666157 tar[1617]: linux-amd64/README.md Jan 30 05:25:56.686223 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 05:25:57.507824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
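The long CRI dump shows containerd's effective pod-runtime settings: overlayfs snapshotter, runc through the io.containerd.runc.v2 shim with SystemdCgroup=false (matching the cgroupsv1 taint noted earlier), sandbox image registry.k8s.io/pause:3.8, and CNI configuration expected under /etc/cni/net.d, hence the "failed to load cni during init" error until a network plugin writes a config there. The same settings expressed in config.toml form (a hand-written excerpt mirroring the logged values, not a file read from this host):

  version = 2
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false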
Jan 30 05:25:57.519234 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:25:57.519461 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 05:25:57.534606 systemd[1]: Startup finished in 9.528s (kernel) + 7.374s (userspace) = 16.902s. Jan 30 05:25:58.488009 kubelet[1732]: E0130 05:25:58.487867 1732 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:25:58.498583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:25:58.499152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:08.750115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 05:26:08.759386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:08.973944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:08.989141 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:09.056654 kubelet[1757]: E0130 05:26:09.056416 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:09.064509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:09.066377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:19.315824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 05:26:19.323068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:19.550016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:19.550785 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:19.655999 kubelet[1778]: E0130 05:26:19.655798 1778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:19.663056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:19.663613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:29.899184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 05:26:29.906945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:30.163862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
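The kubelet crash loop that begins here is expected on a node awaiting bootstrap: /var/lib/kubelet/config.yaml is generated by kubeadm init or kubeadm join, so until one of them runs, systemd restarts the unit roughly every ten seconds and the restart counter below keeps climbing. A hypothetical bootstrap that would end the loop on a control-plane node (the CIDR is a placeholder for whatever the chosen CNI expects):

  kubeadm init --pod-network-cidr=10.244.0.0/16   # writes /var/lib/kubelet/config.yaml among other assets
  systemctl status kubelet                        # should now start and stay active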
Jan 30 05:26:30.179316 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:30.263473 kubelet[1798]: E0130 05:26:30.263304 1798 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:30.272439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:30.273323 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:40.399126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 05:26:40.407950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:40.633010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:40.633214 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:40.665813 update_engine[1608]: I20250130 05:26:40.664005 1608 update_attempter.cc:509] Updating boot flags... Jan 30 05:26:40.692967 kubelet[1819]: E0130 05:26:40.692876 1819 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:40.699680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:40.700048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:40.723176 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1838) Jan 30 05:26:40.801724 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1838) Jan 30 05:26:40.870751 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1838) Jan 30 05:26:50.899078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 05:26:50.908526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:51.138954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:51.139422 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:51.204054 kubelet[1862]: E0130 05:26:51.203778 1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:51.212310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:51.213083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:01.398668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 05:27:01.406107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 05:27:01.686856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:01.698300 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:01.775978 kubelet[1883]: E0130 05:27:01.775893 1883 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:01.785243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:01.786990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:11.898782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 05:27:11.910970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:12.110835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:12.116069 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:12.191048 kubelet[1902]: E0130 05:27:12.190847 1902 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:12.198208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:12.198600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:22.398218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 05:27:22.406818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:22.632969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:22.633238 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:22.699502 kubelet[1924]: E0130 05:27:22.699232 1924 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:22.702996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:22.703451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:32.899121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 05:27:32.908100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:33.188075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 05:27:33.190656 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:33.273100 kubelet[1945]: E0130 05:27:33.272939 1945 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:33.280310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:33.282049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:43.399129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 30 05:27:43.413582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:43.648923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:43.652905 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:43.695126 kubelet[1966]: E0130 05:27:43.695056 1966 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:43.700779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:43.701054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:44.448208 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 05:27:44.456159 systemd[1]: Started sshd@0-49.13.81.87:22-139.178.89.65:45034.service - OpenSSH per-connection server daemon (139.178.89.65:45034). Jan 30 05:27:45.450564 sshd[1976]: Accepted publickey for core from 139.178.89.65 port 45034 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:45.456003 sshd[1976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:45.477044 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 05:27:45.483111 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 05:27:45.487459 systemd-logind[1601]: New session 1 of user core. Jan 30 05:27:45.528167 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 05:27:45.544228 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 05:27:45.553395 (systemd)[1982]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 05:27:45.702311 systemd[1982]: Queued start job for default target default.target. Jan 30 05:27:45.702850 systemd[1982]: Created slice app.slice - User Application Slice. Jan 30 05:27:45.702869 systemd[1982]: Reached target paths.target - Paths. Jan 30 05:27:45.702883 systemd[1982]: Reached target timers.target - Timers. Jan 30 05:27:45.713898 systemd[1982]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 05:27:45.735071 systemd[1982]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 05:27:45.735211 systemd[1982]: Reached target sockets.target - Sockets. 
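The unit name sshd@0-49.13.81.87:22-139.178.89.65:45034.service reveals socket-activated, per-connection sshd: a socket unit with Accept=yes spawns one sshd@ instance per TCP connection, so every login below gets its own numbered unit. To see the mechanism on a system set up this way:

  systemctl cat sshd.socket        # expect ListenStream=22 together with Accept=yes
  systemctl list-units 'sshd@*'    # one running instance per live connection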
Jan 30 05:27:45.735236 systemd[1982]: Reached target basic.target - Basic System. Jan 30 05:27:45.735325 systemd[1982]: Reached target default.target - Main User Target. Jan 30 05:27:45.735387 systemd[1982]: Startup finished in 171ms. Jan 30 05:27:45.737102 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 05:27:45.750317 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 05:27:46.447100 systemd[1]: Started sshd@1-49.13.81.87:22-139.178.89.65:45050.service - OpenSSH per-connection server daemon (139.178.89.65:45050). Jan 30 05:27:47.437938 sshd[1994]: Accepted publickey for core from 139.178.89.65 port 45050 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:47.440078 sshd[1994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:47.445616 systemd-logind[1601]: New session 2 of user core. Jan 30 05:27:47.456200 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 05:27:48.128106 sshd[1994]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:48.137216 systemd[1]: sshd@1-49.13.81.87:22-139.178.89.65:45050.service: Deactivated successfully. Jan 30 05:27:48.145491 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 05:27:48.147301 systemd-logind[1601]: Session 2 logged out. Waiting for processes to exit. Jan 30 05:27:48.148778 systemd-logind[1601]: Removed session 2. Jan 30 05:27:48.293220 systemd[1]: Started sshd@2-49.13.81.87:22-139.178.89.65:45054.service - OpenSSH per-connection server daemon (139.178.89.65:45054). Jan 30 05:27:49.278315 sshd[2002]: Accepted publickey for core from 139.178.89.65 port 45054 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:49.281826 sshd[2002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:49.291308 systemd-logind[1601]: New session 3 of user core. Jan 30 05:27:49.301187 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 05:27:49.961556 sshd[2002]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:49.969276 systemd[1]: sshd@2-49.13.81.87:22-139.178.89.65:45054.service: Deactivated successfully. Jan 30 05:27:49.979877 systemd-logind[1601]: Session 3 logged out. Waiting for processes to exit. Jan 30 05:27:49.980365 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 05:27:49.985373 systemd-logind[1601]: Removed session 3. Jan 30 05:27:50.133891 systemd[1]: Started sshd@3-49.13.81.87:22-139.178.89.65:45058.service - OpenSSH per-connection server daemon (139.178.89.65:45058). Jan 30 05:27:51.144790 sshd[2010]: Accepted publickey for core from 139.178.89.65 port 45058 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:51.148168 sshd[2010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:51.157601 systemd-logind[1601]: New session 4 of user core. Jan 30 05:27:51.174192 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 05:27:51.842809 sshd[2010]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:51.848980 systemd[1]: sshd@3-49.13.81.87:22-139.178.89.65:45058.service: Deactivated successfully. Jan 30 05:27:51.859029 systemd-logind[1601]: Session 4 logged out. Waiting for processes to exit. Jan 30 05:27:51.859657 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 05:27:51.861879 systemd-logind[1601]: Removed session 4. 
Jan 30 05:27:52.011110 systemd[1]: Started sshd@4-49.13.81.87:22-139.178.89.65:40562.service - OpenSSH per-connection server daemon (139.178.89.65:40562). Jan 30 05:27:53.029971 sshd[2018]: Accepted publickey for core from 139.178.89.65 port 40562 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:53.033848 sshd[2018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:53.044309 systemd-logind[1601]: New session 5 of user core. Jan 30 05:27:53.051438 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 05:27:53.583581 sudo[2022]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 05:27:53.584360 sudo[2022]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:53.609770 sudo[2022]: pam_unix(sudo:session): session closed for user root Jan 30 05:27:53.773256 sshd[2018]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:53.780494 systemd[1]: sshd@4-49.13.81.87:22-139.178.89.65:40562.service: Deactivated successfully. Jan 30 05:27:53.788320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 05:27:53.792038 systemd-logind[1601]: Session 5 logged out. Waiting for processes to exit. Jan 30 05:27:53.799893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:53.800207 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 05:27:53.801922 systemd-logind[1601]: Removed session 5. Jan 30 05:27:53.940301 systemd[1]: Started sshd@5-49.13.81.87:22-139.178.89.65:40566.service - OpenSSH per-connection server daemon (139.178.89.65:40566). Jan 30 05:27:54.066317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:54.083344 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:54.149194 kubelet[2041]: E0130 05:27:54.149098 2041 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:54.158116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:54.158515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:54.922224 sshd[2031]: Accepted publickey for core from 139.178.89.65 port 40566 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:54.925654 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:54.933738 systemd-logind[1601]: New session 6 of user core. Jan 30 05:27:54.944117 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 05:27:55.451931 sudo[2053]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 05:27:55.452876 sudo[2053]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:55.462034 sudo[2053]: pam_unix(sudo:session): session closed for user root Jan 30 05:27:55.475815 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 05:27:55.476531 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:55.505077 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 05:27:55.516925 auditctl[2056]: No rules Jan 30 05:27:55.518190 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 05:27:55.518906 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 05:27:55.531312 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:27:55.601567 augenrules[2075]: No rules Jan 30 05:27:55.605740 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:27:55.610588 sudo[2052]: pam_unix(sudo:session): session closed for user root Jan 30 05:27:55.772637 sshd[2031]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:55.783508 systemd[1]: sshd@5-49.13.81.87:22-139.178.89.65:40566.service: Deactivated successfully. Jan 30 05:27:55.791761 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 05:27:55.793283 systemd-logind[1601]: Session 6 logged out. Waiting for processes to exit. Jan 30 05:27:55.795215 systemd-logind[1601]: Removed session 6. Jan 30 05:27:55.940506 systemd[1]: Started sshd@6-49.13.81.87:22-139.178.89.65:40582.service - OpenSSH per-connection server daemon (139.178.89.65:40582). Jan 30 05:27:56.943434 sshd[2084]: Accepted publickey for core from 139.178.89.65 port 40582 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:56.947219 sshd[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:56.958788 systemd-logind[1601]: New session 7 of user core. Jan 30 05:27:56.969331 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 05:27:57.475470 sudo[2088]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 05:27:57.476376 sudo[2088]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:58.081978 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 05:27:58.099644 (dockerd)[2104]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 05:27:58.872908 dockerd[2104]: time="2025-01-30T05:27:58.872806897Z" level=info msg="Starting up" Jan 30 05:27:59.088352 dockerd[2104]: time="2025-01-30T05:27:59.088248437Z" level=info msg="Loading containers: start." Jan 30 05:27:59.283919 kernel: Initializing XFRM netlink socket Jan 30 05:27:59.454630 systemd-networkd[1240]: docker0: Link UP Jan 30 05:27:59.487949 dockerd[2104]: time="2025-01-30T05:27:59.487876615Z" level=info msg="Loading containers: done." Jan 30 05:27:59.536612 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2041219186-merged.mount: Deactivated successfully. 
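
The audit-rules sequence above amounts to flushing the auditd ruleset: the two rule files are removed, audit-rules.service is restarted, and both auditctl and augenrules then report "No rules". The same effect, replayed by hand as a sketch (paths and unit name taken from the log itself):

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules        # reloads rules from /etc/audit/rules.d; nothing left to load
    sudo auditctl -l                          # expected output: No rules
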
Jan 30 05:27:59.540361 dockerd[2104]: time="2025-01-30T05:27:59.540280359Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 05:27:59.540537 dockerd[2104]: time="2025-01-30T05:27:59.540494941Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 05:27:59.540801 dockerd[2104]: time="2025-01-30T05:27:59.540758332Z" level=info msg="Daemon has completed initialization" Jan 30 05:27:59.610309 dockerd[2104]: time="2025-01-30T05:27:59.609668477Z" level=info msg="API listen on /run/docker.sock" Jan 30 05:27:59.610099 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 05:28:01.465899 containerd[1620]: time="2025-01-30T05:28:01.465787313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 05:28:02.219165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129096157.mount: Deactivated successfully. Jan 30 05:28:04.159312 containerd[1620]: time="2025-01-30T05:28:04.159230844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:04.161746 containerd[1620]: time="2025-01-30T05:28:04.161626901Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677104" Jan 30 05:28:04.162097 containerd[1620]: time="2025-01-30T05:28:04.162040272Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:04.166622 containerd[1620]: time="2025-01-30T05:28:04.166539197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:04.168540 containerd[1620]: time="2025-01-30T05:28:04.168252158Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.702395174s" Jan 30 05:28:04.168540 containerd[1620]: time="2025-01-30T05:28:04.168301569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 05:28:04.206791 containerd[1620]: time="2025-01-30T05:28:04.206569798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 05:28:04.398607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 05:28:04.406573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:04.712082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
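
dockerd's overlay2 warning above is informational: when the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, Docker disables the native overlayfs diff path and falls back to a slower file-by-file comparison when building images; running containers are unaffected. A quick way to see both sides of that decision, as a sketch (the /proc/config.gz interface only exists when the kernel was built with IKCONFIG):

    docker info --format '{{.Driver}}'              # overlay2, per the log
    zgrep OVERLAY_FS_REDIRECT_DIR /proc/config.gz   # =y here, hence the fallback
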
Jan 30 05:28:04.719911 (kubelet)[2324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:04.778614 kubelet[2324]: E0130 05:28:04.778504 2324 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:04.786962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:04.787236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:28:06.405000 containerd[1620]: time="2025-01-30T05:28:06.404878933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:06.406861 containerd[1620]: time="2025-01-30T05:28:06.406791949Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605765" Jan 30 05:28:06.408643 containerd[1620]: time="2025-01-30T05:28:06.408530085Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:06.412373 containerd[1620]: time="2025-01-30T05:28:06.412278549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:06.417041 containerd[1620]: time="2025-01-30T05:28:06.415830466Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.209202449s" Jan 30 05:28:06.417041 containerd[1620]: time="2025-01-30T05:28:06.415887652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 05:28:06.470794 containerd[1620]: time="2025-01-30T05:28:06.470720074Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 05:28:07.995649 containerd[1620]: time="2025-01-30T05:28:07.995477030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:07.998039 containerd[1620]: time="2025-01-30T05:28:07.997705063Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783084" Jan 30 05:28:08.001716 containerd[1620]: time="2025-01-30T05:28:07.999734645Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:08.003994 containerd[1620]: time="2025-01-30T05:28:08.003950223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 30 05:28:08.005275 containerd[1620]: time="2025-01-30T05:28:08.005234403Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.534463895s" Jan 30 05:28:08.005275 containerd[1620]: time="2025-01-30T05:28:08.005274428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 05:28:08.034748 containerd[1620]: time="2025-01-30T05:28:08.034663999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 05:28:09.228243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234507876.mount: Deactivated successfully. Jan 30 05:28:09.552504 containerd[1620]: time="2025-01-30T05:28:09.552362313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:09.553980 containerd[1620]: time="2025-01-30T05:28:09.553895819Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058363" Jan 30 05:28:09.555276 containerd[1620]: time="2025-01-30T05:28:09.555236164Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:09.557575 containerd[1620]: time="2025-01-30T05:28:09.557549116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:09.558925 containerd[1620]: time="2025-01-30T05:28:09.558053820Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.523071666s" Jan 30 05:28:09.558925 containerd[1620]: time="2025-01-30T05:28:09.558084396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 05:28:09.583055 containerd[1620]: time="2025-01-30T05:28:09.583005912Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 05:28:10.210005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327629821.mount: Deactivated successfully. 
Jan 30 05:28:11.466965 containerd[1620]: time="2025-01-30T05:28:11.466791975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:11.469012 containerd[1620]: time="2025-01-30T05:28:11.468938327Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Jan 30 05:28:11.470138 containerd[1620]: time="2025-01-30T05:28:11.470071205Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:11.474266 containerd[1620]: time="2025-01-30T05:28:11.473645015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:11.475005 containerd[1620]: time="2025-01-30T05:28:11.474951437Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.891908255s" Jan 30 05:28:11.475085 containerd[1620]: time="2025-01-30T05:28:11.475006309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 05:28:11.509245 containerd[1620]: time="2025-01-30T05:28:11.509143228Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 05:28:12.094716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744585216.mount: Deactivated successfully. 
Jan 30 05:28:12.103942 containerd[1620]: time="2025-01-30T05:28:12.103810246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:12.106516 containerd[1620]: time="2025-01-30T05:28:12.106363559Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Jan 30 05:28:12.107986 containerd[1620]: time="2025-01-30T05:28:12.107927623Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:12.113758 containerd[1620]: time="2025-01-30T05:28:12.113358455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:12.115201 containerd[1620]: time="2025-01-30T05:28:12.115152339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 605.943708ms" Jan 30 05:28:12.115378 containerd[1620]: time="2025-01-30T05:28:12.115346883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 05:28:12.193793 containerd[1620]: time="2025-01-30T05:28:12.193640360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 05:28:12.862178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80167357.mount: Deactivated successfully. Jan 30 05:28:14.632668 containerd[1620]: time="2025-01-30T05:28:14.632585408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:14.634267 containerd[1620]: time="2025-01-30T05:28:14.634197662Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Jan 30 05:28:14.635445 containerd[1620]: time="2025-01-30T05:28:14.635416731Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:14.640620 containerd[1620]: time="2025-01-30T05:28:14.640548676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:14.641805 containerd[1620]: time="2025-01-30T05:28:14.641745644Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.447989969s" Jan 30 05:28:14.641805 containerd[1620]: time="2025-01-30T05:28:14.641789076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 05:28:14.898678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
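
Each completed pull above reports its size and wall time, which gives the effective throughput: the etcd image, 57,236,178 bytes in ~2.448 s, works out to roughly 22 MiB/s. Once pulled, the images live in containerd's content store and can be listed through the CRI, as a sketch (crictl is the standard CRI client; the socket path is the containerd default, not read from this log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep registry.k8s.io
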
Jan 30 05:28:14.906050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:15.165898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:15.170063 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:15.273902 kubelet[2487]: E0130 05:28:15.273818 2487 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:15.280593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:15.281667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:28:17.368320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:17.385886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:17.399838 systemd[1]: Reloading requested from client PID 2556 ('systemctl') (unit session-7.scope)... Jan 30 05:28:17.399976 systemd[1]: Reloading... Jan 30 05:28:17.578281 zram_generator::config[2601]: No configuration found. Jan 30 05:28:17.701211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:28:17.782913 systemd[1]: Reloading finished in 382 ms. Jan 30 05:28:17.839102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:17.846294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:17.855083 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 05:28:17.855794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:17.871414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:18.060884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:18.071556 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:28:18.137177 kubelet[2665]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:28:18.137177 kubelet[2665]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 05:28:18.137177 kubelet[2665]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
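
The three deprecation warnings above point at the kubelet config-file mechanism: --container-runtime-endpoint and --volume-plugin-dir have KubeletConfiguration equivalents, while --pod-infra-container-image has none and is simply slated for removal. The matching stanza in /var/lib/kubelet/config.yaml would look like this sketch (field names are from the public KubeletConfiguration v1beta1 API; the endpoint value is the usual containerd default and the plugin directory is the one this kubelet logs shortly below, so both are illustrative rather than copied from the unit's drop-ins):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
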
Jan 30 05:28:18.138024 kubelet[2665]: I0130 05:28:18.137219 2665 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:28:18.528240 kubelet[2665]: I0130 05:28:18.528164 2665 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:28:18.528240 kubelet[2665]: I0130 05:28:18.528201 2665 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:28:18.528532 kubelet[2665]: I0130 05:28:18.528427 2665 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:28:18.557364 kubelet[2665]: I0130 05:28:18.557293 2665 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:28:18.572747 kubelet[2665]: E0130 05:28:18.572301 2665 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.81.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.599534 kubelet[2665]: I0130 05:28:18.599464 2665 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 05:28:18.607337 kubelet[2665]: I0130 05:28:18.607266 2665 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:28:18.610577 kubelet[2665]: I0130 05:28:18.607325 2665 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c-240f39d8fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:28:18.610769 kubelet[2665]: I0130 05:28:18.610593 2665 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 05:28:18.610769 kubelet[2665]: I0130 05:28:18.610625 2665 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:28:18.611017 kubelet[2665]: I0130 05:28:18.610965 2665 state_mem.go:36] "Initialized new in-memory 
state store" Jan 30 05:28:18.612800 kubelet[2665]: I0130 05:28:18.612737 2665 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:28:18.612800 kubelet[2665]: I0130 05:28:18.612780 2665 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:28:18.612902 kubelet[2665]: I0130 05:28:18.612822 2665 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:28:18.612902 kubelet[2665]: I0130 05:28:18.612850 2665 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:28:18.620086 kubelet[2665]: W0130 05:28:18.618494 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.81.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.620086 kubelet[2665]: E0130 05:28:18.618641 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.81.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.620086 kubelet[2665]: W0130 05:28:18.619476 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.81.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-240f39d8fc&limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.620086 kubelet[2665]: E0130 05:28:18.619537 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.81.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-240f39d8fc&limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.620491 kubelet[2665]: I0130 05:28:18.620353 2665 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:28:18.626383 kubelet[2665]: I0130 05:28:18.625117 2665 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:28:18.626383 kubelet[2665]: W0130 05:28:18.625260 2665 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 05:28:18.626782 kubelet[2665]: I0130 05:28:18.626747 2665 server.go:1264] "Started kubelet" Jan 30 05:28:18.638389 kubelet[2665]: I0130 05:28:18.638340 2665 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:28:18.645415 kubelet[2665]: I0130 05:28:18.645323 2665 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:28:18.646804 kubelet[2665]: I0130 05:28:18.646772 2665 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:28:18.647835 kubelet[2665]: I0130 05:28:18.647766 2665 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:28:18.648155 kubelet[2665]: I0130 05:28:18.648121 2665 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:28:18.648619 kubelet[2665]: I0130 05:28:18.648574 2665 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:28:18.652915 kubelet[2665]: I0130 05:28:18.652882 2665 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:28:18.653115 kubelet[2665]: I0130 05:28:18.653064 2665 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:28:18.654909 kubelet[2665]: W0130 05:28:18.654862 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.81.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.655033 kubelet[2665]: E0130 05:28:18.655014 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.81.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.655278 kubelet[2665]: E0130 05:28:18.655244 2665 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.81.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-240f39d8fc?timeout=10s\": dial tcp 49.13.81.87:6443: connect: connection refused" interval="200ms" Jan 30 05:28:18.657918 kubelet[2665]: I0130 05:28:18.657861 2665 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:28:18.658045 kubelet[2665]: I0130 05:28:18.658009 2665 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:28:18.658416 kubelet[2665]: E0130 05:28:18.656753 2665 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.81.87:6443/api/v1/namespaces/default/events\": dial tcp 49.13.81.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-c-240f39d8fc.181f613da7ef4514 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-c-240f39d8fc,UID:ci-4081-3-0-c-240f39d8fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c-240f39d8fc,},FirstTimestamp:2025-01-30 05:28:18.626675988 +0000 UTC m=+0.546844592,LastTimestamp:2025-01-30 05:28:18.626675988 +0000 UTC m=+0.546844592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c-240f39d8fc,}" Jan 30 
05:28:18.659064 kubelet[2665]: E0130 05:28:18.659041 2665 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:28:18.661361 kubelet[2665]: I0130 05:28:18.661327 2665 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:28:18.697131 kubelet[2665]: I0130 05:28:18.697093 2665 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:28:18.698953 kubelet[2665]: I0130 05:28:18.698636 2665 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 05:28:18.698953 kubelet[2665]: I0130 05:28:18.698660 2665 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:28:18.698953 kubelet[2665]: I0130 05:28:18.698679 2665 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:28:18.698953 kubelet[2665]: E0130 05:28:18.698741 2665 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:28:18.707047 kubelet[2665]: W0130 05:28:18.707023 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.81.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.707627 kubelet[2665]: I0130 05:28:18.707594 2665 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:28:18.707627 kubelet[2665]: I0130 05:28:18.707613 2665 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:28:18.707627 kubelet[2665]: I0130 05:28:18.707629 2665 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:28:18.709110 kubelet[2665]: E0130 05:28:18.709062 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.81.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:18.711020 kubelet[2665]: I0130 05:28:18.711002 2665 policy_none.go:49] "None policy: Start" Jan 30 05:28:18.711785 kubelet[2665]: I0130 05:28:18.711754 2665 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:28:18.711785 kubelet[2665]: I0130 05:28:18.711779 2665 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:28:18.719708 kubelet[2665]: I0130 05:28:18.718492 2665 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:28:18.719708 kubelet[2665]: I0130 05:28:18.718659 2665 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:28:18.719708 kubelet[2665]: I0130 05:28:18.718780 2665 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:28:18.721238 kubelet[2665]: E0130 05:28:18.721208 2665 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-c-240f39d8fc\" not found" Jan 30 05:28:18.752181 kubelet[2665]: I0130 05:28:18.752094 2665 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.752909 kubelet[2665]: E0130 05:28:18.752828 2665 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.81.87:6443/api/v1/nodes\": dial tcp 49.13.81.87:6443: connect: 
connection refused" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.799556 kubelet[2665]: I0130 05:28:18.799143 2665 topology_manager.go:215] "Topology Admit Handler" podUID="5ad73313bb74c511777dca46043a4aa5" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.804355 kubelet[2665]: I0130 05:28:18.804280 2665 topology_manager.go:215] "Topology Admit Handler" podUID="2aadb5647ce92c8bb70cf69501d1055a" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.807784 kubelet[2665]: I0130 05:28:18.807630 2665 topology_manager.go:215] "Topology Admit Handler" podUID="c25cd633f17e0d8aa0b1f700dc8b4165" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.856741 kubelet[2665]: E0130 05:28:18.856638 2665 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.81.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-240f39d8fc?timeout=10s\": dial tcp 49.13.81.87:6443: connect: connection refused" interval="400ms" Jan 30 05:28:18.954954 kubelet[2665]: I0130 05:28:18.954456 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.954954 kubelet[2665]: I0130 05:28:18.954518 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.954954 kubelet[2665]: I0130 05:28:18.954553 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2aadb5647ce92c8bb70cf69501d1055a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-c-240f39d8fc\" (UID: \"2aadb5647ce92c8bb70cf69501d1055a\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.954954 kubelet[2665]: I0130 05:28:18.954584 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25cd633f17e0d8aa0b1f700dc8b4165-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c-240f39d8fc\" (UID: \"c25cd633f17e0d8aa0b1f700dc8b4165\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.954954 kubelet[2665]: I0130 05:28:18.954615 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25cd633f17e0d8aa0b1f700dc8b4165-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c-240f39d8fc\" (UID: \"c25cd633f17e0d8aa0b1f700dc8b4165\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.955328 kubelet[2665]: I0130 05:28:18.954643 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: 
\"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.955328 kubelet[2665]: I0130 05:28:18.954674 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.955328 kubelet[2665]: I0130 05:28:18.954736 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25cd633f17e0d8aa0b1f700dc8b4165-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c-240f39d8fc\" (UID: \"c25cd633f17e0d8aa0b1f700dc8b4165\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.955328 kubelet[2665]: I0130 05:28:18.954767 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.956116 kubelet[2665]: I0130 05:28:18.956086 2665 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:18.956959 kubelet[2665]: E0130 05:28:18.956891 2665 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.81.87:6443/api/v1/nodes\": dial tcp 49.13.81.87:6443: connect: connection refused" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:19.124884 containerd[1620]: time="2025-01-30T05:28:19.124515877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c-240f39d8fc,Uid:2aadb5647ce92c8bb70cf69501d1055a,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:19.129880 containerd[1620]: time="2025-01-30T05:28:19.129356881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c-240f39d8fc,Uid:5ad73313bb74c511777dca46043a4aa5,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:19.130116 containerd[1620]: time="2025-01-30T05:28:19.130041171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c-240f39d8fc,Uid:c25cd633f17e0d8aa0b1f700dc8b4165,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:19.258234 kubelet[2665]: E0130 05:28:19.258148 2665 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.81.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-240f39d8fc?timeout=10s\": dial tcp 49.13.81.87:6443: connect: connection refused" interval="800ms" Jan 30 05:28:19.361418 kubelet[2665]: I0130 05:28:19.361361 2665 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:19.361992 kubelet[2665]: E0130 05:28:19.361918 2665 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.81.87:6443/api/v1/nodes\": dial tcp 49.13.81.87:6443: connect: connection refused" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:19.642599 kubelet[2665]: W0130 05:28:19.642498 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://49.13.81.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-240f39d8fc&limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:19.642599 kubelet[2665]: E0130 05:28:19.642598 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.81.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-240f39d8fc&limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:19.722655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740123903.mount: Deactivated successfully. Jan 30 05:28:19.734241 containerd[1620]: time="2025-01-30T05:28:19.734121918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:19.740015 containerd[1620]: time="2025-01-30T05:28:19.739890886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 30 05:28:19.741338 containerd[1620]: time="2025-01-30T05:28:19.741281056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:19.744356 containerd[1620]: time="2025-01-30T05:28:19.744299141Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:19.747154 containerd[1620]: time="2025-01-30T05:28:19.747062028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:19.748707 containerd[1620]: time="2025-01-30T05:28:19.748523842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:28:19.750107 containerd[1620]: time="2025-01-30T05:28:19.750047170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:28:19.755738 containerd[1620]: time="2025-01-30T05:28:19.754819587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:19.757317 containerd[1620]: time="2025-01-30T05:28:19.757272823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.665507ms" Jan 30 05:28:19.760301 containerd[1620]: time="2025-01-30T05:28:19.759957405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.796069ms" Jan 30 05:28:19.764427 containerd[1620]: time="2025-01-30T05:28:19.764356612Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.884845ms" Jan 30 05:28:19.814734 kubelet[2665]: W0130 05:28:19.813251 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.81.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:19.814734 kubelet[2665]: E0130 05:28:19.813311 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.81.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:19.991825 containerd[1620]: time="2025-01-30T05:28:19.991560801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:19.991825 containerd[1620]: time="2025-01-30T05:28:19.991607358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:19.991825 containerd[1620]: time="2025-01-30T05:28:19.991621805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:19.991825 containerd[1620]: time="2025-01-30T05:28:19.991735447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:19.992310 containerd[1620]: time="2025-01-30T05:28:19.992133412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:19.992310 containerd[1620]: time="2025-01-30T05:28:19.992219563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:19.992441 containerd[1620]: time="2025-01-30T05:28:19.992270910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:19.993253 containerd[1620]: time="2025-01-30T05:28:19.993215215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:20.000474 containerd[1620]: time="2025-01-30T05:28:20.000230043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:20.000474 containerd[1620]: time="2025-01-30T05:28:20.000271702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:20.000474 containerd[1620]: time="2025-01-30T05:28:20.000281470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:20.000474 containerd[1620]: time="2025-01-30T05:28:20.000354777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:20.059371 kubelet[2665]: E0130 05:28:20.059326 2665 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.81.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-240f39d8fc?timeout=10s\": dial tcp 49.13.81.87:6443: connect: connection refused" interval="1.6s" Jan 30 05:28:20.076349 containerd[1620]: time="2025-01-30T05:28:20.076306090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c-240f39d8fc,Uid:c25cd633f17e0d8aa0b1f700dc8b4165,Namespace:kube-system,Attempt:0,} returns sandbox id \"57ad13bd6b0a45c37cf25a58065a2ddb0936fa34c77147a8806263270a4ea32c\"" Jan 30 05:28:20.083161 containerd[1620]: time="2025-01-30T05:28:20.081201607Z" level=info msg="CreateContainer within sandbox \"57ad13bd6b0a45c37cf25a58065a2ddb0936fa34c77147a8806263270a4ea32c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 05:28:20.098124 containerd[1620]: time="2025-01-30T05:28:20.098089322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c-240f39d8fc,Uid:5ad73313bb74c511777dca46043a4aa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"38f7e4dd89a1f47c5496025ea07d460233743fc1983c6688445831988fa06203\"" Jan 30 05:28:20.102924 containerd[1620]: time="2025-01-30T05:28:20.102780135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c-240f39d8fc,Uid:2aadb5647ce92c8bb70cf69501d1055a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b3a4f5a27dd99bfdf8d1f4530ad941ae2095dee1bbd894f6fa6fc4461e8159\"" Jan 30 05:28:20.104037 containerd[1620]: time="2025-01-30T05:28:20.103824037Z" level=info msg="CreateContainer within sandbox \"38f7e4dd89a1f47c5496025ea07d460233743fc1983c6688445831988fa06203\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 05:28:20.107738 containerd[1620]: time="2025-01-30T05:28:20.107696179Z" level=info msg="CreateContainer within sandbox \"e4b3a4f5a27dd99bfdf8d1f4530ad941ae2095dee1bbd894f6fa6fc4461e8159\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 05:28:20.113435 kubelet[2665]: W0130 05:28:20.113377 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.81.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:20.113540 kubelet[2665]: E0130 05:28:20.113525 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.81.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:20.121310 containerd[1620]: time="2025-01-30T05:28:20.121207060Z" level=info msg="CreateContainer within sandbox \"57ad13bd6b0a45c37cf25a58065a2ddb0936fa34c77147a8806263270a4ea32c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be1661f7eb19e6db8aa0978bbf0b75340ba57d29ffcf3efdee0065019c287e81\"" Jan 30 05:28:20.122058 containerd[1620]: time="2025-01-30T05:28:20.122041850Z" level=info msg="StartContainer for \"be1661f7eb19e6db8aa0978bbf0b75340ba57d29ffcf3efdee0065019c287e81\"" Jan 30 05:28:20.141596 containerd[1620]: time="2025-01-30T05:28:20.141517205Z" level=info msg="CreateContainer within sandbox \"e4b3a4f5a27dd99bfdf8d1f4530ad941ae2095dee1bbd894f6fa6fc4461e8159\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"421ba78610de8357cdb798af6c23362666a4f473a7c671cb8da5be0cf1af156e\"" Jan 30 05:28:20.147492 containerd[1620]: time="2025-01-30T05:28:20.142930519Z" level=info msg="CreateContainer within sandbox \"38f7e4dd89a1f47c5496025ea07d460233743fc1983c6688445831988fa06203\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a568b4b851e5c78d8e9198e7f9dc283ecf90a4bf3e9b8f424fb657449d1f8e16\"" Jan 30 05:28:20.147492 containerd[1620]: time="2025-01-30T05:28:20.142972638Z" level=info msg="StartContainer for \"421ba78610de8357cdb798af6c23362666a4f473a7c671cb8da5be0cf1af156e\"" Jan 30 05:28:20.147492 containerd[1620]: time="2025-01-30T05:28:20.143413742Z" level=info msg="StartContainer for \"a568b4b851e5c78d8e9198e7f9dc283ecf90a4bf3e9b8f424fb657449d1f8e16\"" Jan 30 05:28:20.165534 kubelet[2665]: I0130 05:28:20.165122 2665 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:20.165878 kubelet[2665]: E0130 05:28:20.165751 2665 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.81.87:6443/api/v1/nodes\": dial tcp 49.13.81.87:6443: connect: connection refused" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:20.201490 kubelet[2665]: W0130 05:28:20.201387 2665 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.81.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:20.201490 kubelet[2665]: E0130 05:28:20.201489 2665 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.81.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.81.87:6443: connect: connection refused Jan 30 05:28:20.221822 containerd[1620]: time="2025-01-30T05:28:20.221779310Z" level=info msg="StartContainer for \"be1661f7eb19e6db8aa0978bbf0b75340ba57d29ffcf3efdee0065019c287e81\" returns successfully" Jan 30 05:28:20.257263 containerd[1620]: time="2025-01-30T05:28:20.256896911Z" level=info msg="StartContainer for \"a568b4b851e5c78d8e9198e7f9dc283ecf90a4bf3e9b8f424fb657449d1f8e16\" returns successfully" Jan 30 05:28:20.291218 containerd[1620]: time="2025-01-30T05:28:20.291178369Z" level=info msg="StartContainer for \"421ba78610de8357cdb798af6c23362666a4f473a7c671cb8da5be0cf1af156e\" returns successfully" Jan 30 05:28:21.772748 kubelet[2665]: I0130 05:28:21.769866 2665 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:21.983049 kubelet[2665]: E0130 05:28:21.983002 2665 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-c-240f39d8fc\" not found" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:22.047266 kubelet[2665]: I0130 05:28:22.046908 2665 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:22.058929 kubelet[2665]: E0130 05:28:22.057589 2665 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-240f39d8fc\" not found" Jan 30 05:28:22.619662 kubelet[2665]: I0130 05:28:22.619593 2665 apiserver.go:52] "Watching apiserver" Jan 30 05:28:22.653276 kubelet[2665]: I0130 05:28:22.653183 2665 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 
05:28:24.547126 systemd[1]: Reloading requested from client PID 2942 ('systemctl') (unit session-7.scope)... Jan 30 05:28:24.547162 systemd[1]: Reloading... Jan 30 05:28:24.719740 zram_generator::config[2988]: No configuration found. Jan 30 05:28:24.852570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:28:24.942042 systemd[1]: Reloading finished in 393 ms. Jan 30 05:28:24.984658 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:24.985624 kubelet[2665]: E0130 05:28:24.985161 2665 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-0-c-240f39d8fc.181f613da7ef4514 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-c-240f39d8fc,UID:ci-4081-3-0-c-240f39d8fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c-240f39d8fc,},FirstTimestamp:2025-01-30 05:28:18.626675988 +0000 UTC m=+0.546844592,LastTimestamp:2025-01-30 05:28:18.626675988 +0000 UTC m=+0.546844592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c-240f39d8fc,}" Jan 30 05:28:24.986230 kubelet[2665]: I0130 05:28:24.985985 2665 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:28:24.999875 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 05:28:25.000930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:25.014817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:25.206867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:25.218900 (kubelet)[3043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:28:25.354966 kubelet[3043]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:28:25.354966 kubelet[3043]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 05:28:25.354966 kubelet[3043]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
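
The docker.socket warning above shows systemd rewriting a listener path below the legacy /var/run/ directory to its /run/ equivalent at unit-load time. A minimal Go sketch of that normalization, purely illustrative and not systemd's actual implementation:

```go
// Illustrative sketch (not systemd's code): normalize a socket path below
// the legacy /var/run/ directory to /run/, as the docker.socket warning
// above describes for its ListenStream= value.
package main

import (
	"fmt"
	"strings"
)

func normalizeLegacyRunPath(p string) string {
	if strings.HasPrefix(p, "/var/run/") {
		return "/run/" + strings.TrimPrefix(p, "/var/run/")
	}
	return p
}

func main() {
	fmt.Println(normalizeLegacyRunPath("/var/run/docker.sock")) // /run/docker.sock
}
```
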
Jan 30 05:28:25.354966 kubelet[3043]: I0130 05:28:25.354977 3043 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:28:25.364848 kubelet[3043]: I0130 05:28:25.364797 3043 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:28:25.364848 kubelet[3043]: I0130 05:28:25.364831 3043 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:28:25.365157 kubelet[3043]: I0130 05:28:25.365121 3043 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:28:25.366620 kubelet[3043]: I0130 05:28:25.366596 3043 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 05:28:25.368311 kubelet[3043]: I0130 05:28:25.367786 3043 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:28:25.382398 kubelet[3043]: I0130 05:28:25.382372 3043 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 05:28:25.383678 kubelet[3043]: I0130 05:28:25.383632 3043 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:28:25.386520 kubelet[3043]: I0130 05:28:25.383789 3043 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c-240f39d8fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:28:25.386520 kubelet[3043]: I0130 05:28:25.384116 3043 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 05:28:25.386520 kubelet[3043]: I0130 05:28:25.384133 3043 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:28:25.386520 kubelet[3043]: I0130 05:28:25.384203 3043 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:28:25.386520 kubelet[3043]: I0130 05:28:25.386131 3043 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:28:25.386908 kubelet[3043]: I0130 05:28:25.386169 3043 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 30 05:28:25.386908 kubelet[3043]: I0130 05:28:25.386207 3043 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:28:25.386908 kubelet[3043]: I0130 05:28:25.386226 3043 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:28:25.391489 kubelet[3043]: I0130 05:28:25.390891 3043 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:28:25.393806 kubelet[3043]: I0130 05:28:25.393785 3043 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:28:25.395320 kubelet[3043]: I0130 05:28:25.394463 3043 server.go:1264] "Started kubelet" Jan 30 05:28:25.398562 kubelet[3043]: I0130 05:28:25.397123 3043 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:28:25.398755 kubelet[3043]: I0130 05:28:25.398728 3043 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:28:25.398851 kubelet[3043]: I0130 05:28:25.398798 3043 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:28:25.399816 kubelet[3043]: I0130 05:28:25.399803 3043 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:28:25.406995 kubelet[3043]: I0130 05:28:25.406190 3043 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:28:25.410103 kubelet[3043]: I0130 05:28:25.399840 3043 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:28:25.416796 kubelet[3043]: I0130 05:28:25.416777 3043 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:28:25.418052 kubelet[3043]: I0130 05:28:25.418024 3043 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:28:25.418970 kubelet[3043]: I0130 05:28:25.418956 3043 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:28:25.419127 kubelet[3043]: I0130 05:28:25.419112 3043 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:28:25.420215 kubelet[3043]: E0130 05:28:25.420201 3043 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:28:25.424321 kubelet[3043]: I0130 05:28:25.424263 3043 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:28:25.425492 kubelet[3043]: I0130 05:28:25.425464 3043 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:28:25.425670 kubelet[3043]: I0130 05:28:25.425500 3043 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:28:25.425670 kubelet[3043]: I0130 05:28:25.425531 3043 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:28:25.425670 kubelet[3043]: E0130 05:28:25.425572 3043 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:28:25.426760 kubelet[3043]: I0130 05:28:25.426745 3043 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:28:25.517218 kubelet[3043]: I0130 05:28:25.516853 3043 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.526004 kubelet[3043]: E0130 05:28:25.525657 3043 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 05:28:25.526004 kubelet[3043]: I0130 05:28:25.525730 3043 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.526004 kubelet[3043]: I0130 05:28:25.525811 3043 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.559323 sudo[3074]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 05:28:25.560623 sudo[3074]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 05:28:25.566526 kubelet[3043]: I0130 05:28:25.565892 3043 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:28:25.566526 kubelet[3043]: I0130 05:28:25.565909 3043 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:28:25.566526 kubelet[3043]: I0130 05:28:25.565929 3043 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:28:25.566526 kubelet[3043]: I0130 05:28:25.566064 3043 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 05:28:25.566526 kubelet[3043]: I0130 05:28:25.566076 3043 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 05:28:25.566526 kubelet[3043]: I0130 05:28:25.566094 3043 policy_none.go:49] "None policy: Start" Jan 30 05:28:25.568883 kubelet[3043]: I0130 05:28:25.568830 3043 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:28:25.568883 kubelet[3043]: I0130 05:28:25.568849 3043 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:28:25.569311 kubelet[3043]: I0130 05:28:25.569299 3043 state_mem.go:75] "Updated machine memory state" Jan 30 05:28:25.574069 kubelet[3043]: I0130 05:28:25.573878 3043 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:28:25.574613 kubelet[3043]: I0130 05:28:25.574095 3043 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:28:25.575097 kubelet[3043]: I0130 05:28:25.575071 3043 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:28:25.726352 kubelet[3043]: I0130 05:28:25.726296 3043 topology_manager.go:215] "Topology Admit Handler" podUID="5ad73313bb74c511777dca46043a4aa5" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.726547 kubelet[3043]: I0130 05:28:25.726418 3043 topology_manager.go:215] "Topology Admit Handler" podUID="2aadb5647ce92c8bb70cf69501d1055a" podNamespace="kube-system" 
podName="kube-scheduler-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.726547 kubelet[3043]: I0130 05:28:25.726478 3043 topology_manager.go:215] "Topology Admit Handler" podUID="c25cd633f17e0d8aa0b1f700dc8b4165" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.819814 kubelet[3043]: I0130 05:28:25.819456 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.819814 kubelet[3043]: I0130 05:28:25.819518 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.819814 kubelet[3043]: I0130 05:28:25.819545 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.819814 kubelet[3043]: I0130 05:28:25.819569 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.819814 kubelet[3043]: I0130 05:28:25.819593 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c25cd633f17e0d8aa0b1f700dc8b4165-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c-240f39d8fc\" (UID: \"c25cd633f17e0d8aa0b1f700dc8b4165\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.820065 kubelet[3043]: I0130 05:28:25.819614 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c25cd633f17e0d8aa0b1f700dc8b4165-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c-240f39d8fc\" (UID: \"c25cd633f17e0d8aa0b1f700dc8b4165\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.820065 kubelet[3043]: I0130 05:28:25.819635 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ad73313bb74c511777dca46043a4aa5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-c-240f39d8fc\" (UID: \"5ad73313bb74c511777dca46043a4aa5\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.820065 kubelet[3043]: I0130 05:28:25.819653 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2aadb5647ce92c8bb70cf69501d1055a-kubeconfig\") 
pod \"kube-scheduler-ci-4081-3-0-c-240f39d8fc\" (UID: \"2aadb5647ce92c8bb70cf69501d1055a\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:25.820065 kubelet[3043]: I0130 05:28:25.819674 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c25cd633f17e0d8aa0b1f700dc8b4165-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c-240f39d8fc\" (UID: \"c25cd633f17e0d8aa0b1f700dc8b4165\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:26.161791 sudo[3074]: pam_unix(sudo:session): session closed for user root Jan 30 05:28:26.390613 kubelet[3043]: I0130 05:28:26.390547 3043 apiserver.go:52] "Watching apiserver" Jan 30 05:28:26.417829 kubelet[3043]: I0130 05:28:26.417427 3043 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:28:26.487547 kubelet[3043]: E0130 05:28:26.486280 3043 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-c-240f39d8fc\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-c-240f39d8fc" Jan 30 05:28:26.524646 kubelet[3043]: I0130 05:28:26.524540 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-c-240f39d8fc" podStartSLOduration=1.524502981 podStartE2EDuration="1.524502981s" podCreationTimestamp="2025-01-30 05:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:26.519996761 +0000 UTC m=+1.251624116" watchObservedRunningTime="2025-01-30 05:28:26.524502981 +0000 UTC m=+1.256130337" Jan 30 05:28:26.531852 kubelet[3043]: I0130 05:28:26.531782 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-c-240f39d8fc" podStartSLOduration=1.5317609509999999 podStartE2EDuration="1.531760951s" podCreationTimestamp="2025-01-30 05:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:26.530912684 +0000 UTC m=+1.262540039" watchObservedRunningTime="2025-01-30 05:28:26.531760951 +0000 UTC m=+1.263388296" Jan 30 05:28:26.555314 kubelet[3043]: I0130 05:28:26.555162 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-c-240f39d8fc" podStartSLOduration=1.555136594 podStartE2EDuration="1.555136594s" podCreationTimestamp="2025-01-30 05:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:26.544502779 +0000 UTC m=+1.276130124" watchObservedRunningTime="2025-01-30 05:28:26.555136594 +0000 UTC m=+1.286763939" Jan 30 05:28:28.315005 sudo[2088]: pam_unix(sudo:session): session closed for user root Jan 30 05:28:28.477545 sshd[2084]: pam_unix(sshd:session): session closed for user core Jan 30 05:28:28.484517 systemd[1]: sshd@6-49.13.81.87:22-139.178.89.65:40582.service: Deactivated successfully. Jan 30 05:28:28.494165 systemd-logind[1601]: Session 7 logged out. Waiting for processes to exit. Jan 30 05:28:28.497130 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 05:28:28.500472 systemd-logind[1601]: Removed session 7. 
Jan 30 05:28:39.029844 kubelet[3043]: I0130 05:28:39.027853 3043 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 05:28:39.031977 containerd[1620]: time="2025-01-30T05:28:39.031261232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 05:28:39.032872 kubelet[3043]: I0130 05:28:39.031508 3043 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 05:28:39.837948 kubelet[3043]: I0130 05:28:39.836804 3043 topology_manager.go:215] "Topology Admit Handler" podUID="8d963e68-2b0e-4c81-b03e-48903d6f85b1" podNamespace="kube-system" podName="kube-proxy-9hf8v" Jan 30 05:28:39.849936 kubelet[3043]: I0130 05:28:39.849475 3043 topology_manager.go:215] "Topology Admit Handler" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" podNamespace="kube-system" podName="cilium-97xwd" Jan 30 05:28:39.896292 kubelet[3043]: I0130 05:28:39.896234 3043 topology_manager.go:215] "Topology Admit Handler" podUID="3b83b2f4-b0c0-4176-97e7-37dc0e605ed3" podNamespace="kube-system" podName="cilium-operator-599987898-774ts" Jan 30 05:28:39.915812 kubelet[3043]: I0130 05:28:39.915774 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-clustermesh-secrets\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.915953 kubelet[3043]: I0130 05:28:39.915934 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-kernel\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.915977 kubelet[3043]: I0130 05:28:39.915965 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-cilium-config-path\") pod \"cilium-operator-599987898-774ts\" (UID: \"3b83b2f4-b0c0-4176-97e7-37dc0e605ed3\") " pod="kube-system/cilium-operator-599987898-774ts" Jan 30 05:28:39.916002 kubelet[3043]: I0130 05:28:39.915982 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d963e68-2b0e-4c81-b03e-48903d6f85b1-kube-proxy\") pod \"kube-proxy-9hf8v\" (UID: \"8d963e68-2b0e-4c81-b03e-48903d6f85b1\") " pod="kube-system/kube-proxy-9hf8v" Jan 30 05:28:39.916002 kubelet[3043]: I0130 05:28:39.915995 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d963e68-2b0e-4c81-b03e-48903d6f85b1-lib-modules\") pod \"kube-proxy-9hf8v\" (UID: \"8d963e68-2b0e-4c81-b03e-48903d6f85b1\") " pod="kube-system/kube-proxy-9hf8v" Jan 30 05:28:39.916543 kubelet[3043]: I0130 05:28:39.916009 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hostproc\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.916543 kubelet[3043]: I0130 05:28:39.916141 3043 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-net\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.916543 kubelet[3043]: I0130 05:28:39.916154 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hubble-tls\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.916543 kubelet[3043]: I0130 05:28:39.916169 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztbr\" (UniqueName: \"kubernetes.io/projected/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-kube-api-access-sztbr\") pod \"cilium-operator-599987898-774ts\" (UID: \"3b83b2f4-b0c0-4176-97e7-37dc0e605ed3\") " pod="kube-system/cilium-operator-599987898-774ts" Jan 30 05:28:39.916543 kubelet[3043]: I0130 05:28:39.916184 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ft8j\" (UniqueName: \"kubernetes.io/projected/8d963e68-2b0e-4c81-b03e-48903d6f85b1-kube-api-access-9ft8j\") pod \"kube-proxy-9hf8v\" (UID: \"8d963e68-2b0e-4c81-b03e-48903d6f85b1\") " pod="kube-system/kube-proxy-9hf8v" Jan 30 05:28:39.917844 kubelet[3043]: I0130 05:28:39.916319 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-run\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.917844 kubelet[3043]: I0130 05:28:39.916335 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-etc-cni-netd\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.917844 kubelet[3043]: I0130 05:28:39.916347 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-xtables-lock\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.917844 kubelet[3043]: I0130 05:28:39.916360 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-lib-modules\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.917844 kubelet[3043]: I0130 05:28:39.916498 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-bpf-maps\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.917844 kubelet[3043]: I0130 05:28:39.916514 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-cgroup\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.918011 kubelet[3043]: I0130 05:28:39.916660 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d963e68-2b0e-4c81-b03e-48903d6f85b1-xtables-lock\") pod \"kube-proxy-9hf8v\" (UID: \"8d963e68-2b0e-4c81-b03e-48903d6f85b1\") " pod="kube-system/kube-proxy-9hf8v" Jan 30 05:28:39.918011 kubelet[3043]: I0130 05:28:39.916797 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htrgb\" (UniqueName: \"kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-kube-api-access-htrgb\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.918011 kubelet[3043]: I0130 05:28:39.916814 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cni-path\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:39.918011 kubelet[3043]: I0130 05:28:39.916827 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-config-path\") pod \"cilium-97xwd\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " pod="kube-system/cilium-97xwd" Jan 30 05:28:40.149378 containerd[1620]: time="2025-01-30T05:28:40.149290097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9hf8v,Uid:8d963e68-2b0e-4c81-b03e-48903d6f85b1,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:40.162644 containerd[1620]: time="2025-01-30T05:28:40.162227675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97xwd,Uid:5b95efdc-0040-48b2-b0e6-9dd57bd04e74,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:40.209548 containerd[1620]: time="2025-01-30T05:28:40.209504301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-774ts,Uid:3b83b2f4-b0c0-4176-97e7-37dc0e605ed3,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:40.243395 containerd[1620]: time="2025-01-30T05:28:40.240809020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:40.243395 containerd[1620]: time="2025-01-30T05:28:40.241204902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:40.243395 containerd[1620]: time="2025-01-30T05:28:40.241356966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:40.246960 containerd[1620]: time="2025-01-30T05:28:40.242738002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:40.255123 containerd[1620]: time="2025-01-30T05:28:40.254988873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:40.255928 containerd[1620]: time="2025-01-30T05:28:40.255073021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:40.256094 containerd[1620]: time="2025-01-30T05:28:40.256060319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:40.256836 containerd[1620]: time="2025-01-30T05:28:40.256798101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:40.295489 containerd[1620]: time="2025-01-30T05:28:40.295217057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:40.295489 containerd[1620]: time="2025-01-30T05:28:40.295262983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:40.295489 containerd[1620]: time="2025-01-30T05:28:40.295272721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:40.295489 containerd[1620]: time="2025-01-30T05:28:40.295345937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:40.333921 containerd[1620]: time="2025-01-30T05:28:40.333872315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9hf8v,Uid:8d963e68-2b0e-4c81-b03e-48903d6f85b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bd433bbb6e46b736c1095d12e9b5af368de619ff2901764a34abef54be54146\"" Jan 30 05:28:40.339747 containerd[1620]: time="2025-01-30T05:28:40.339715774Z" level=info msg="CreateContainer within sandbox \"2bd433bbb6e46b736c1095d12e9b5af368de619ff2901764a34abef54be54146\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 05:28:40.355680 containerd[1620]: time="2025-01-30T05:28:40.355628691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97xwd,Uid:5b95efdc-0040-48b2-b0e6-9dd57bd04e74,Namespace:kube-system,Attempt:0,} returns sandbox id \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\"" Jan 30 05:28:40.362483 containerd[1620]: time="2025-01-30T05:28:40.362426335Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 05:28:40.367615 containerd[1620]: time="2025-01-30T05:28:40.367543473Z" level=info msg="CreateContainer within sandbox \"2bd433bbb6e46b736c1095d12e9b5af368de619ff2901764a34abef54be54146\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d775b4610fe5846e4d9e75c20bfdfda1d4aaaa0fb96babb81d0e9e58ce8badd\"" Jan 30 05:28:40.368728 containerd[1620]: time="2025-01-30T05:28:40.368155449Z" level=info msg="StartContainer for \"5d775b4610fe5846e4d9e75c20bfdfda1d4aaaa0fb96babb81d0e9e58ce8badd\"" Jan 30 05:28:40.395216 containerd[1620]: time="2025-01-30T05:28:40.395181610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-774ts,Uid:3b83b2f4-b0c0-4176-97e7-37dc0e605ed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\"" Jan 30 05:28:40.447651 containerd[1620]: time="2025-01-30T05:28:40.447085548Z" 
level=info msg="StartContainer for \"5d775b4610fe5846e4d9e75c20bfdfda1d4aaaa0fb96babb81d0e9e58ce8badd\" returns successfully" Jan 30 05:28:44.942993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1610566877.mount: Deactivated successfully. Jan 30 05:28:46.860764 containerd[1620]: time="2025-01-30T05:28:46.860705154Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:46.862057 containerd[1620]: time="2025-01-30T05:28:46.861983848Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 05:28:46.863446 containerd[1620]: time="2025-01-30T05:28:46.863412783Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:46.865613 containerd[1620]: time="2025-01-30T05:28:46.864866504Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.502382852s" Jan 30 05:28:46.865613 containerd[1620]: time="2025-01-30T05:28:46.864898244Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 05:28:46.866403 containerd[1620]: time="2025-01-30T05:28:46.866368516Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 05:28:46.868890 containerd[1620]: time="2025-01-30T05:28:46.868842639Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 05:28:46.959006 containerd[1620]: time="2025-01-30T05:28:46.958778106Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c\"" Jan 30 05:28:46.961741 containerd[1620]: time="2025-01-30T05:28:46.961388463Z" level=info msg="StartContainer for \"ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c\"" Jan 30 05:28:47.166897 systemd[1]: run-containerd-runc-k8s.io-ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c-runc.hocDXo.mount: Deactivated successfully. 
Jan 30 05:28:47.219612 containerd[1620]: time="2025-01-30T05:28:47.219565202Z" level=info msg="StartContainer for \"ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c\" returns successfully" Jan 30 05:28:47.459923 containerd[1620]: time="2025-01-30T05:28:47.421586926Z" level=info msg="shim disconnected" id=ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c namespace=k8s.io Jan 30 05:28:47.459923 containerd[1620]: time="2025-01-30T05:28:47.459569049Z" level=warning msg="cleaning up after shim disconnected" id=ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c namespace=k8s.io Jan 30 05:28:47.459923 containerd[1620]: time="2025-01-30T05:28:47.459603434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:28:47.552718 containerd[1620]: time="2025-01-30T05:28:47.552426952Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 05:28:47.578754 containerd[1620]: time="2025-01-30T05:28:47.578638276Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d\"" Jan 30 05:28:47.583913 kubelet[3043]: I0130 05:28:47.581326 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9hf8v" podStartSLOduration=8.581298386 podStartE2EDuration="8.581298386s" podCreationTimestamp="2025-01-30 05:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:40.535963468 +0000 UTC m=+15.267590833" watchObservedRunningTime="2025-01-30 05:28:47.581298386 +0000 UTC m=+22.312925751" Jan 30 05:28:47.588158 containerd[1620]: time="2025-01-30T05:28:47.588098428Z" level=info msg="StartContainer for \"2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d\"" Jan 30 05:28:47.675850 containerd[1620]: time="2025-01-30T05:28:47.675785429Z" level=info msg="StartContainer for \"2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d\" returns successfully" Jan 30 05:28:47.697141 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 05:28:47.698038 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:28:47.698138 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:28:47.707349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:28:47.742963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:28:47.751806 containerd[1620]: time="2025-01-30T05:28:47.751735928Z" level=info msg="shim disconnected" id=2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d namespace=k8s.io Jan 30 05:28:47.752035 containerd[1620]: time="2025-01-30T05:28:47.752020981Z" level=warning msg="cleaning up after shim disconnected" id=2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d namespace=k8s.io Jan 30 05:28:47.752082 containerd[1620]: time="2025-01-30T05:28:47.752071295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:28:47.952529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c-rootfs.mount: Deactivated successfully. 
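
The apply-sysctl-overwrites init container above adjusts kernel parameters, which is also why systemd re-runs systemd-sysctl.service right afterwards. At its core, setting a sysctl is a write to a file under /proc/sys; a hedged illustration follows (the actual Cilium init step is more involved, and the key and value here are examples):

```go
// Hedged illustration of a sysctl overwrite: a dotted key such as
// net.ipv4.ip_forward maps to a path under /proc/sys, and the value is
// written to that file. Requires root; key and value are example choices.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	if err := writeSysctl("net.ipv4.ip_forward", "1"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
	}
}
```
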
Jan 30 05:28:48.558011 containerd[1620]: time="2025-01-30T05:28:48.557921111Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 05:28:48.612783 systemd-journald[1159]: Under memory pressure, flushing caches. Jan 30 05:28:48.610233 systemd-resolved[1500]: Under memory pressure, flushing caches. Jan 30 05:28:48.610369 systemd-resolved[1500]: Flushed all caches. Jan 30 05:28:48.616274 containerd[1620]: time="2025-01-30T05:28:48.616213862Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51\"" Jan 30 05:28:48.617214 containerd[1620]: time="2025-01-30T05:28:48.617170574Z" level=info msg="StartContainer for \"5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51\"" Jan 30 05:28:48.729363 containerd[1620]: time="2025-01-30T05:28:48.729238967Z" level=info msg="StartContainer for \"5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51\" returns successfully" Jan 30 05:28:48.773204 containerd[1620]: time="2025-01-30T05:28:48.773090090Z" level=info msg="shim disconnected" id=5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51 namespace=k8s.io Jan 30 05:28:48.773204 containerd[1620]: time="2025-01-30T05:28:48.773151936Z" level=warning msg="cleaning up after shim disconnected" id=5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51 namespace=k8s.io Jan 30 05:28:48.773204 containerd[1620]: time="2025-01-30T05:28:48.773166373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:28:48.954342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51-rootfs.mount: Deactivated successfully. 
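
The mount-bpf-fs init container above makes the BPF filesystem available so Cilium can pin eBPF maps across restarts. Assuming the conventional mount point, the step is equivalent to `mount -t bpf bpffs /sys/fs/bpf`; a minimal sketch using golang.org/x/sys/unix, which must run as root on Linux:

```go
// Sketch of the mount-bpf-fs step, assuming the conventional /sys/fs/bpf
// mount point: the Go equivalent of `mount -t bpf bpffs /sys/fs/bpf`.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mounting bpffs failed:", err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf")
}
```
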
Jan 30 05:28:49.359750 containerd[1620]: time="2025-01-30T05:28:49.359625482Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:49.360743 containerd[1620]: time="2025-01-30T05:28:49.360710081Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 05:28:49.361928 containerd[1620]: time="2025-01-30T05:28:49.361887686Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:49.363048 containerd[1620]: time="2025-01-30T05:28:49.363002433Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.496605043s" Jan 30 05:28:49.363048 containerd[1620]: time="2025-01-30T05:28:49.363033171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 05:28:49.366299 containerd[1620]: time="2025-01-30T05:28:49.366185774Z" level=info msg="CreateContainer within sandbox \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 05:28:49.381072 containerd[1620]: time="2025-01-30T05:28:49.381032551Z" level=info msg="CreateContainer within sandbox \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\"" Jan 30 05:28:49.382486 containerd[1620]: time="2025-01-30T05:28:49.381524192Z" level=info msg="StartContainer for \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\"" Jan 30 05:28:49.461938 containerd[1620]: time="2025-01-30T05:28:49.461819271Z" level=info msg="StartContainer for \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\" returns successfully" Jan 30 05:28:49.576361 containerd[1620]: time="2025-01-30T05:28:49.576327249Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 05:28:49.602790 containerd[1620]: time="2025-01-30T05:28:49.601774345Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005\"" Jan 30 05:28:49.605095 containerd[1620]: time="2025-01-30T05:28:49.604208993Z" level=info msg="StartContainer for \"780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005\"" Jan 30 05:28:49.707536 containerd[1620]: time="2025-01-30T05:28:49.707487756Z" level=info msg="StartContainer for 
\"780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005\" returns successfully" Jan 30 05:28:49.733251 kubelet[3043]: I0130 05:28:49.732697 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-774ts" podStartSLOduration=1.7671140699999999 podStartE2EDuration="10.732665376s" podCreationTimestamp="2025-01-30 05:28:39 +0000 UTC" firstStartedPulling="2025-01-30 05:28:40.398436673 +0000 UTC m=+15.130064008" lastFinishedPulling="2025-01-30 05:28:49.363987978 +0000 UTC m=+24.095615314" observedRunningTime="2025-01-30 05:28:49.603720358 +0000 UTC m=+24.335347693" watchObservedRunningTime="2025-01-30 05:28:49.732665376 +0000 UTC m=+24.464292711" Jan 30 05:28:49.752222 containerd[1620]: time="2025-01-30T05:28:49.751936728Z" level=info msg="shim disconnected" id=780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005 namespace=k8s.io Jan 30 05:28:49.752222 containerd[1620]: time="2025-01-30T05:28:49.752144989Z" level=warning msg="cleaning up after shim disconnected" id=780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005 namespace=k8s.io Jan 30 05:28:49.752222 containerd[1620]: time="2025-01-30T05:28:49.752155719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:28:50.584444 containerd[1620]: time="2025-01-30T05:28:50.584373293Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 05:28:50.643140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866222574.mount: Deactivated successfully. Jan 30 05:28:50.670620 containerd[1620]: time="2025-01-30T05:28:50.670474858Z" level=info msg="CreateContainer within sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\"" Jan 30 05:28:50.672651 containerd[1620]: time="2025-01-30T05:28:50.672427315Z" level=info msg="StartContainer for \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\"" Jan 30 05:28:50.758335 containerd[1620]: time="2025-01-30T05:28:50.758214670Z" level=info msg="StartContainer for \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\" returns successfully" Jan 30 05:28:50.983173 kubelet[3043]: I0130 05:28:50.983136 3043 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 05:28:51.012416 kubelet[3043]: I0130 05:28:51.012361 3043 topology_manager.go:215] "Topology Admit Handler" podUID="e434c756-966f-4bc1-83f8-9ea863c93673" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6v77m" Jan 30 05:28:51.016528 kubelet[3043]: I0130 05:28:51.016120 3043 topology_manager.go:215] "Topology Admit Handler" podUID="94cdf30f-4ea0-4c28-b2e3-d86adc8ba254" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rrx7z" Jan 30 05:28:51.098624 kubelet[3043]: I0130 05:28:51.098398 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e434c756-966f-4bc1-83f8-9ea863c93673-config-volume\") pod \"coredns-7db6d8ff4d-6v77m\" (UID: \"e434c756-966f-4bc1-83f8-9ea863c93673\") " pod="kube-system/coredns-7db6d8ff4d-6v77m" Jan 30 05:28:51.098624 kubelet[3043]: I0130 05:28:51.098459 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/94cdf30f-4ea0-4c28-b2e3-d86adc8ba254-config-volume\") pod \"coredns-7db6d8ff4d-rrx7z\" (UID: \"94cdf30f-4ea0-4c28-b2e3-d86adc8ba254\") " pod="kube-system/coredns-7db6d8ff4d-rrx7z" Jan 30 05:28:51.098624 kubelet[3043]: I0130 05:28:51.098496 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grj9f\" (UniqueName: \"kubernetes.io/projected/e434c756-966f-4bc1-83f8-9ea863c93673-kube-api-access-grj9f\") pod \"coredns-7db6d8ff4d-6v77m\" (UID: \"e434c756-966f-4bc1-83f8-9ea863c93673\") " pod="kube-system/coredns-7db6d8ff4d-6v77m" Jan 30 05:28:51.098624 kubelet[3043]: I0130 05:28:51.098553 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtn8\" (UniqueName: \"kubernetes.io/projected/94cdf30f-4ea0-4c28-b2e3-d86adc8ba254-kube-api-access-lbtn8\") pod \"coredns-7db6d8ff4d-rrx7z\" (UID: \"94cdf30f-4ea0-4c28-b2e3-d86adc8ba254\") " pod="kube-system/coredns-7db6d8ff4d-rrx7z" Jan 30 05:28:51.328324 containerd[1620]: time="2025-01-30T05:28:51.328196535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rrx7z,Uid:94cdf30f-4ea0-4c28-b2e3-d86adc8ba254,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:51.329811 containerd[1620]: time="2025-01-30T05:28:51.329771504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6v77m,Uid:e434c756-966f-4bc1-83f8-9ea863c93673,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:51.598900 kubelet[3043]: I0130 05:28:51.598650 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-97xwd" podStartSLOduration=6.089637976 podStartE2EDuration="12.598629649s" podCreationTimestamp="2025-01-30 05:28:39 +0000 UTC" firstStartedPulling="2025-01-30 05:28:40.356779825 +0000 UTC m=+15.088407160" lastFinishedPulling="2025-01-30 05:28:46.865771499 +0000 UTC m=+21.597398833" observedRunningTime="2025-01-30 05:28:51.596665942 +0000 UTC m=+26.328293277" watchObservedRunningTime="2025-01-30 05:28:51.598629649 +0000 UTC m=+26.330256984" Jan 30 05:28:53.356049 systemd-networkd[1240]: cilium_host: Link UP Jan 30 05:28:53.356291 systemd-networkd[1240]: cilium_net: Link UP Jan 30 05:28:53.356558 systemd-networkd[1240]: cilium_net: Gained carrier Jan 30 05:28:53.359389 systemd-networkd[1240]: cilium_host: Gained carrier Jan 30 05:28:53.527308 systemd-networkd[1240]: cilium_vxlan: Link UP Jan 30 05:28:53.528114 systemd-networkd[1240]: cilium_vxlan: Gained carrier Jan 30 05:28:53.628801 systemd-networkd[1240]: cilium_net: Gained IPv6LL Jan 30 05:28:54.042844 kernel: NET: Registered PF_ALG protocol family Jan 30 05:28:54.241905 systemd-networkd[1240]: cilium_host: Gained IPv6LL Jan 30 05:28:54.892421 systemd-networkd[1240]: lxc_health: Link UP Jan 30 05:28:54.913000 systemd-networkd[1240]: lxc_health: Gained carrier Jan 30 05:28:54.947709 systemd-networkd[1240]: cilium_vxlan: Gained IPv6LL Jan 30 05:28:55.499438 systemd-networkd[1240]: lxc0b434bfbb71d: Link UP Jan 30 05:28:55.510474 kernel: eth0: renamed from tmp7092d Jan 30 05:28:55.520472 systemd-networkd[1240]: lxc0b434bfbb71d: Gained carrier Jan 30 05:28:55.565512 systemd-networkd[1240]: lxc201e12785c41: Link UP Jan 30 05:28:55.578552 kernel: eth0: renamed from tmpc2705 Jan 30 05:28:55.583473 systemd-networkd[1240]: lxc201e12785c41: Gained carrier Jan 30 05:28:56.738219 systemd-networkd[1240]: lxc201e12785c41: Gained IPv6LL Jan 30 05:28:56.929883 systemd-networkd[1240]: lxc_health: Gained 
IPv6LL Jan 30 05:28:56.930179 systemd-networkd[1240]: lxc0b434bfbb71d: Gained IPv6LL Jan 30 05:28:59.368560 containerd[1620]: time="2025-01-30T05:28:59.368434063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:59.372742 containerd[1620]: time="2025-01-30T05:28:59.369155385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:59.372742 containerd[1620]: time="2025-01-30T05:28:59.369171344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:59.372742 containerd[1620]: time="2025-01-30T05:28:59.369249089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:59.396364 containerd[1620]: time="2025-01-30T05:28:59.395677412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:59.396364 containerd[1620]: time="2025-01-30T05:28:59.395794451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:59.396364 containerd[1620]: time="2025-01-30T05:28:59.395809669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:59.396364 containerd[1620]: time="2025-01-30T05:28:59.395900129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:59.503803 containerd[1620]: time="2025-01-30T05:28:59.503768014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rrx7z,Uid:94cdf30f-4ea0-4c28-b2e3-d86adc8ba254,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2705be9c73913d3751bda6ac8e8028bdf3d0d65f24ca87b7216cf8b64da7b9e\"" Jan 30 05:28:59.511073 containerd[1620]: time="2025-01-30T05:28:59.510777680Z" level=info msg="CreateContainer within sandbox \"c2705be9c73913d3751bda6ac8e8028bdf3d0d65f24ca87b7216cf8b64da7b9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:28:59.536109 containerd[1620]: time="2025-01-30T05:28:59.535846406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6v77m,Uid:e434c756-966f-4bc1-83f8-9ea863c93673,Namespace:kube-system,Attempt:0,} returns sandbox id \"7092dd3c650121338d1844b8769128556ba6c6e39751b192fd9ed5fd29dec04e\"" Jan 30 05:28:59.542259 containerd[1620]: time="2025-01-30T05:28:59.541638534Z" level=info msg="CreateContainer within sandbox \"7092dd3c650121338d1844b8769128556ba6c6e39751b192fd9ed5fd29dec04e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:28:59.563152 containerd[1620]: time="2025-01-30T05:28:59.563055160Z" level=info msg="CreateContainer within sandbox \"c2705be9c73913d3751bda6ac8e8028bdf3d0d65f24ca87b7216cf8b64da7b9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d884aaa34a65202f54c2a9fc4aafb305d1bac7a6acd5fb6ce4a585a95338e8e\"" Jan 30 05:28:59.566756 containerd[1620]: time="2025-01-30T05:28:59.565951384Z" level=info msg="StartContainer for \"9d884aaa34a65202f54c2a9fc4aafb305d1bac7a6acd5fb6ce4a585a95338e8e\"" Jan 30 05:28:59.584025 containerd[1620]: time="2025-01-30T05:28:59.583983132Z" level=info msg="CreateContainer within sandbox 
\"7092dd3c650121338d1844b8769128556ba6c6e39751b192fd9ed5fd29dec04e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd5b7f131c446e2b447c62093f63e19fda57e66b7ae4968f7b8201f6d8355f7a\"" Jan 30 05:28:59.585474 containerd[1620]: time="2025-01-30T05:28:59.585456300Z" level=info msg="StartContainer for \"cd5b7f131c446e2b447c62093f63e19fda57e66b7ae4968f7b8201f6d8355f7a\"" Jan 30 05:28:59.674835 containerd[1620]: time="2025-01-30T05:28:59.674790257Z" level=info msg="StartContainer for \"9d884aaa34a65202f54c2a9fc4aafb305d1bac7a6acd5fb6ce4a585a95338e8e\" returns successfully" Jan 30 05:28:59.687046 containerd[1620]: time="2025-01-30T05:28:59.687000322Z" level=info msg="StartContainer for \"cd5b7f131c446e2b447c62093f63e19fda57e66b7ae4968f7b8201f6d8355f7a\" returns successfully" Jan 30 05:29:00.385251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933413389.mount: Deactivated successfully. Jan 30 05:29:00.660766 kubelet[3043]: I0130 05:29:00.656473 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6v77m" podStartSLOduration=21.6563968 podStartE2EDuration="21.6563968s" podCreationTimestamp="2025-01-30 05:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:29:00.653589493 +0000 UTC m=+35.385216838" watchObservedRunningTime="2025-01-30 05:29:00.6563968 +0000 UTC m=+35.388024166" Jan 30 05:30:25.783087 update_engine[1608]: I20250130 05:30:25.782815 1608 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 05:30:25.783087 update_engine[1608]: I20250130 05:30:25.782891 1608 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 05:30:25.789119 update_engine[1608]: I20250130 05:30:25.788316 1608 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.789839 1608 omaha_request_params.cc:62] Current group set to lts Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.790043 1608 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.790055 1608 update_attempter.cc:643] Scheduling an action processor start. 
Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.790082 1608 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.790130 1608 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.790210 1608 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.790220 1608 omaha_request_action.cc:272] Request: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: Jan 30 05:30:25.790533 update_engine[1608]: I20250130 05:30:25.790230 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:30:25.817488 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 05:30:25.818299 update_engine[1608]: I20250130 05:30:25.818240 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:30:25.820894 update_engine[1608]: I20250130 05:30:25.818761 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:30:25.826719 update_engine[1608]: E20250130 05:30:25.826548 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:30:25.826906 update_engine[1608]: I20250130 05:30:25.826866 1608 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 05:30:35.633651 update_engine[1608]: I20250130 05:30:35.633545 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:30:35.634163 update_engine[1608]: I20250130 05:30:35.633976 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:30:35.634361 update_engine[1608]: I20250130 05:30:35.634318 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:30:35.635181 update_engine[1608]: E20250130 05:30:35.635138 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:30:35.635251 update_engine[1608]: I20250130 05:30:35.635221 1608 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 05:30:45.641214 update_engine[1608]: I20250130 05:30:45.641104 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:30:45.642107 update_engine[1608]: I20250130 05:30:45.641482 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:30:45.642107 update_engine[1608]: I20250130 05:30:45.641840 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
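
Note on the libcurl_http_fetcher lines above: the check can never succeed because the Omaha URL is the literal string "disabled" ("Posting an Omaha request to disabled", then "Could not resolve host: disabled"), which is consistent with updates being switched off via SERVER=disabled in /etc/flatcar/update.conf. DNS resolution therefore fails, and the fetcher retries on a roughly 10-second cadence (05:30:25 -> 05:30:35 -> 05:30:45). A minimal sketch of that retry shape; the delay and the three-retry cap are read off the timestamps, not taken from update_engine's source:

    import time
    import urllib.request

    def fetch_with_retries(url, max_retries=3, delay_s=10.0, timeout_s=1.0):
        """One initial attempt plus max_retries retries, a fixed delay apart."""
        for retry in range(max_retries + 1):
            try:
                return urllib.request.urlopen(url, timeout=timeout_s).status
            except OSError:  # DNS failures ("Could not resolve host") land here
                if retry == max_retries:
                    raise  # "Transfer resulted in an error (0), 0 bytes downloaded"
                print(f"No HTTP response, retry {retry + 1}")
                time.sleep(delay_s)

    # fetch_with_retries("http://disabled/")  # fails DNS exactly like the log above
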
Jan 30 05:30:45.643217 update_engine[1608]: E20250130 05:30:45.643136 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:30:45.643368 update_engine[1608]: I20250130 05:30:45.643285 1608 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 05:30:55.642278 update_engine[1608]: I20250130 05:30:55.642165 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.642642 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.643110 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:30:55.645857 update_engine[1608]: E20250130 05:30:55.644001 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644083 1608 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644101 1608 omaha_request_action.cc:617] Omaha request response: Jan 30 05:30:55.645857 update_engine[1608]: E20250130 05:30:55.644234 1608 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644268 1608 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644281 1608 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644295 1608 update_attempter.cc:306] Processing Done. Jan 30 05:30:55.645857 update_engine[1608]: E20250130 05:30:55.644320 1608 update_attempter.cc:619] Update failed. Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644338 1608 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644352 1608 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644367 1608 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
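
Note on the error-code conversion above ("Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse", then "error code: 37"): with no HTTP response, the raw code is a base value (2000) plus the HTTP status (0 here), and codes in that HTTP band collapse to the single reportable code 37 before payload-state handling. A sketch inferred from just these two log lines; the exact width of the band is an assumption, not taken from the source:

    HTTP_RESPONSE_BASE = 2000            # "Transfer resulted in an error (0)" -> 2000 + 0
    OMAHA_ERROR_IN_HTTP_RESPONSE = 37    # kActionCodeOmahaErrorInHTTPResponse in the log

    def base_error_code(code: int) -> int:
        """Collapse per-HTTP-status codes into one generic Omaha HTTP error."""
        if HTTP_RESPONSE_BASE <= code < HTTP_RESPONSE_BASE + 1000:  # assumed band
            return OMAHA_ERROR_IN_HTTP_RESPONSE
        return code

    assert base_error_code(2000) == 37   # matches "Converting error code 2000 to ..."
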
Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644490 1608 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644526 1608 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 05:30:55.645857 update_engine[1608]: I20250130 05:30:55.644541 1608 omaha_request_action.cc:272] Request: Jan 30 05:30:55.645857 update_engine[1608]: Jan 30 05:30:55.645857 update_engine[1608]: Jan 30 05:30:55.647210 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 05:30:55.647210 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 05:30:55.648086 update_engine[1608]: Jan 30 05:30:55.648086 update_engine[1608]: Jan 30 05:30:55.648086 update_engine[1608]: Jan 30 05:30:55.648086 update_engine[1608]: Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.644555 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.645193 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.645457 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:30:55.648086 update_engine[1608]: E20250130 05:30:55.646291 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.646366 1608 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.646384 1608 omaha_request_action.cc:617] Omaha request response: Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.646399 1608 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.646411 1608 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.646427 1608 update_attempter.cc:306] Processing Done. Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.646442 1608 update_attempter.cc:310] Error event sent. Jan 30 05:30:55.648086 update_engine[1608]: I20250130 05:30:55.646461 1608 update_check_scheduler.cc:74] Next update check in 40m5s Jan 30 05:31:51.132932 systemd[1]: Started sshd@7-49.13.81.87:22-2.57.122.188:54740.service - OpenSSH per-connection server daemon (2.57.122.188:54740). Jan 30 05:31:51.320997 sshd[4442]: Invalid user tenderly from 2.57.122.188 port 54740 Jan 30 05:31:51.351747 sshd[4442]: Connection closed by invalid user tenderly 2.57.122.188 port 54740 [preauth] Jan 30 05:31:51.356283 systemd[1]: sshd@7-49.13.81.87:22-2.57.122.188:54740.service: Deactivated successfully. Jan 30 05:33:14.562219 systemd[1]: Started sshd@8-49.13.81.87:22-139.178.89.65:51496.service - OpenSSH per-connection server daemon (139.178.89.65:51496). Jan 30 05:33:15.601360 sshd[4456]: Accepted publickey for core from 139.178.89.65 port 51496 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:15.605065 sshd[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:15.614463 systemd-logind[1601]: New session 8 of user core. 
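
Note on "Next update check in 40m5s" above: the interval lands just under a flat 45 minutes, which fits a jittered scheduler, a base period plus random fuzz so a fleet of machines does not poll Omaha in lockstep. A hypothetical sketch of that behavior; the 45 min base and 10 min fuzz window are assumptions chosen to make 40m5s reachable, not values read from update_check_scheduler:

    import random

    def next_check_delay_s(base_s=45 * 60, fuzz_s=10 * 60):
        """Base interval +/- half the fuzz window: here 40m..50m."""
        return base_s + random.uniform(-fuzz_s / 2, fuzz_s / 2)

    # 40m5s == 2405s falls inside the [2400s, 3000s] band this would produce.
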
Jan 30 05:33:15.625377 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 05:33:17.192784 sshd[4456]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:17.202894 systemd[1]: sshd@8-49.13.81.87:22-139.178.89.65:51496.service: Deactivated successfully. Jan 30 05:33:17.212572 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 05:33:17.214598 systemd-logind[1601]: Session 8 logged out. Waiting for processes to exit. Jan 30 05:33:17.217568 systemd-logind[1601]: Removed session 8. Jan 30 05:33:22.358407 systemd[1]: Started sshd@9-49.13.81.87:22-139.178.89.65:58464.service - OpenSSH per-connection server daemon (139.178.89.65:58464). Jan 30 05:33:23.364399 sshd[4471]: Accepted publickey for core from 139.178.89.65 port 58464 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:23.367452 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:23.374753 systemd-logind[1601]: New session 9 of user core. Jan 30 05:33:23.382767 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 05:33:24.161493 sshd[4471]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:24.173571 systemd[1]: sshd@9-49.13.81.87:22-139.178.89.65:58464.service: Deactivated successfully. Jan 30 05:33:24.181829 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 05:33:24.183777 systemd-logind[1601]: Session 9 logged out. Waiting for processes to exit. Jan 30 05:33:24.185814 systemd-logind[1601]: Removed session 9. Jan 30 05:33:29.332011 systemd[1]: Started sshd@10-49.13.81.87:22-139.178.89.65:58480.service - OpenSSH per-connection server daemon (139.178.89.65:58480). Jan 30 05:33:30.317792 sshd[4488]: Accepted publickey for core from 139.178.89.65 port 58480 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:30.321207 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:30.330735 systemd-logind[1601]: New session 10 of user core. Jan 30 05:33:30.339059 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 05:33:31.133913 sshd[4488]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:31.147415 systemd[1]: sshd@10-49.13.81.87:22-139.178.89.65:58480.service: Deactivated successfully. Jan 30 05:33:31.155019 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 05:33:31.156987 systemd-logind[1601]: Session 10 logged out. Waiting for processes to exit. Jan 30 05:33:31.159407 systemd-logind[1601]: Removed session 10. Jan 30 05:33:31.297793 systemd[1]: Started sshd@11-49.13.81.87:22-139.178.89.65:51004.service - OpenSSH per-connection server daemon (139.178.89.65:51004). Jan 30 05:33:32.301800 sshd[4503]: Accepted publickey for core from 139.178.89.65 port 51004 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:32.305599 sshd[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:32.317800 systemd-logind[1601]: New session 11 of user core. Jan 30 05:33:32.326771 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 05:33:33.191948 sshd[4503]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:33.207311 systemd[1]: sshd@11-49.13.81.87:22-139.178.89.65:51004.service: Deactivated successfully. Jan 30 05:33:33.218457 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 05:33:33.219007 systemd-logind[1601]: Session 11 logged out. Waiting for processes to exit. 
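
Note on the SSH traffic above: aside from the one rejected preauth probe ("Invalid user tenderly"), the sshd/systemd-logind entries come in regular open/close pairs ("New session N of user core" ... "Removed session N"), so session lifetimes can be recovered mechanically. A small parsing sketch over two lines copied from the log above (journal timestamps carry no year, so strptime defaults to 1900, which is harmless for durations):

    import re
    from datetime import datetime

    LOG = """\
    Jan 30 05:33:15.614463 systemd-logind[1601]: New session 8 of user core.
    Jan 30 05:33:17.217568 systemd-logind[1601]: Removed session 8.
    """

    opened = {}
    for line in LOG.splitlines():
        line = line.strip()
        if not line:
            continue
        ts = datetime.strptime(line[:22], "%b %d %H:%M:%S.%f")
        if m := re.search(r"New session (\d+)", line):
            opened[m.group(1)] = ts
        elif m := re.search(r"Removed session (\d+)", line):
            dur = (ts - opened.pop(m.group(1))).total_seconds()
            print(f"session {m.group(1)}: {dur:.1f}s")   # session 8: 1.6s
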
Jan 30 05:33:33.223938 systemd-logind[1601]: Removed session 11. Jan 30 05:33:33.358787 systemd[1]: Started sshd@12-49.13.81.87:22-139.178.89.65:51008.service - OpenSSH per-connection server daemon (139.178.89.65:51008). Jan 30 05:33:34.375473 sshd[4515]: Accepted publickey for core from 139.178.89.65 port 51008 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:34.379053 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:34.388147 systemd-logind[1601]: New session 12 of user core. Jan 30 05:33:34.397404 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 05:33:35.195941 sshd[4515]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:35.199917 systemd[1]: sshd@12-49.13.81.87:22-139.178.89.65:51008.service: Deactivated successfully. Jan 30 05:33:35.204863 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 05:33:35.206919 systemd-logind[1601]: Session 12 logged out. Waiting for processes to exit. Jan 30 05:33:35.208670 systemd-logind[1601]: Removed session 12. Jan 30 05:33:40.365218 systemd[1]: Started sshd@13-49.13.81.87:22-139.178.89.65:51020.service - OpenSSH per-connection server daemon (139.178.89.65:51020). Jan 30 05:33:40.686435 systemd[1]: Started sshd@14-49.13.81.87:22-36.40.88.142:33879.service - OpenSSH per-connection server daemon (36.40.88.142:33879). Jan 30 05:33:41.369031 sshd[4529]: Accepted publickey for core from 139.178.89.65 port 51020 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:41.372411 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:41.381832 systemd-logind[1601]: New session 13 of user core. Jan 30 05:33:41.390124 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 05:33:42.169301 sshd[4529]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:42.175785 systemd[1]: sshd@13-49.13.81.87:22-139.178.89.65:51020.service: Deactivated successfully. Jan 30 05:33:42.185590 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 05:33:42.187810 systemd-logind[1601]: Session 13 logged out. Waiting for processes to exit. Jan 30 05:33:42.190296 systemd-logind[1601]: Removed session 13. Jan 30 05:33:42.335437 systemd[1]: Started sshd@15-49.13.81.87:22-139.178.89.65:56990.service - OpenSSH per-connection server daemon (139.178.89.65:56990). Jan 30 05:33:43.342434 sshd[4547]: Accepted publickey for core from 139.178.89.65 port 56990 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:43.346563 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:43.357317 systemd-logind[1601]: New session 14 of user core. Jan 30 05:33:43.363435 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 05:33:44.502848 sshd[4547]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:44.509560 systemd[1]: sshd@15-49.13.81.87:22-139.178.89.65:56990.service: Deactivated successfully. Jan 30 05:33:44.517976 systemd-logind[1601]: Session 14 logged out. Waiting for processes to exit. Jan 30 05:33:44.519184 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 05:33:44.520900 systemd-logind[1601]: Removed session 14. Jan 30 05:33:44.671327 systemd[1]: Started sshd@16-49.13.81.87:22-139.178.89.65:57002.service - OpenSSH per-connection server daemon (139.178.89.65:57002). 
Jan 30 05:33:45.711523 sshd[4559]: Accepted publickey for core from 139.178.89.65 port 57002 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:45.715270 sshd[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:45.725278 systemd-logind[1601]: New session 15 of user core. Jan 30 05:33:45.731979 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 05:33:48.013468 sshd[4559]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:48.028724 systemd[1]: sshd@16-49.13.81.87:22-139.178.89.65:57002.service: Deactivated successfully. Jan 30 05:33:48.037454 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 05:33:48.039367 systemd-logind[1601]: Session 15 logged out. Waiting for processes to exit. Jan 30 05:33:48.042362 systemd-logind[1601]: Removed session 15. Jan 30 05:33:48.179916 systemd[1]: Started sshd@17-49.13.81.87:22-139.178.89.65:57006.service - OpenSSH per-connection server daemon (139.178.89.65:57006). Jan 30 05:33:49.187156 sshd[4578]: Accepted publickey for core from 139.178.89.65 port 57006 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:49.190796 sshd[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:49.202333 systemd-logind[1601]: New session 16 of user core. Jan 30 05:33:49.205299 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 05:33:50.191937 sshd[4578]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:50.202614 systemd[1]: sshd@17-49.13.81.87:22-139.178.89.65:57006.service: Deactivated successfully. Jan 30 05:33:50.215596 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 05:33:50.218126 systemd-logind[1601]: Session 16 logged out. Waiting for processes to exit. Jan 30 05:33:50.220887 systemd-logind[1601]: Removed session 16. Jan 30 05:33:50.357450 systemd[1]: Started sshd@18-49.13.81.87:22-139.178.89.65:57010.service - OpenSSH per-connection server daemon (139.178.89.65:57010). Jan 30 05:33:51.347508 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 57010 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:51.350569 sshd[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:51.364439 systemd-logind[1601]: New session 17 of user core. Jan 30 05:33:51.369283 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 05:33:52.130413 sshd[4589]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:52.137912 systemd[1]: sshd@18-49.13.81.87:22-139.178.89.65:57010.service: Deactivated successfully. Jan 30 05:33:52.144073 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 05:33:52.146392 systemd-logind[1601]: Session 17 logged out. Waiting for processes to exit. Jan 30 05:33:52.148887 systemd-logind[1601]: Removed session 17. Jan 30 05:33:57.301574 systemd[1]: Started sshd@19-49.13.81.87:22-139.178.89.65:60876.service - OpenSSH per-connection server daemon (139.178.89.65:60876). Jan 30 05:33:58.302907 sshd[4608]: Accepted publickey for core from 139.178.89.65 port 60876 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:58.306789 sshd[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:58.316010 systemd-logind[1601]: New session 18 of user core. Jan 30 05:33:58.324771 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 05:33:59.099151 sshd[4608]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:59.104431 systemd[1]: sshd@19-49.13.81.87:22-139.178.89.65:60876.service: Deactivated successfully. Jan 30 05:33:59.111292 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 05:33:59.115262 systemd-logind[1601]: Session 18 logged out. Waiting for processes to exit. Jan 30 05:33:59.117360 systemd-logind[1601]: Removed session 18. Jan 30 05:34:04.268148 systemd[1]: Started sshd@20-49.13.81.87:22-139.178.89.65:53086.service - OpenSSH per-connection server daemon (139.178.89.65:53086). Jan 30 05:34:05.269963 sshd[4623]: Accepted publickey for core from 139.178.89.65 port 53086 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:05.273826 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:05.284728 systemd-logind[1601]: New session 19 of user core. Jan 30 05:34:05.290260 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 05:34:06.057934 sshd[4623]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:06.067204 systemd[1]: sshd@20-49.13.81.87:22-139.178.89.65:53086.service: Deactivated successfully. Jan 30 05:34:06.076790 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 05:34:06.078300 systemd-logind[1601]: Session 19 logged out. Waiting for processes to exit. Jan 30 05:34:06.080350 systemd-logind[1601]: Removed session 19. Jan 30 05:34:06.231164 systemd[1]: Started sshd@21-49.13.81.87:22-139.178.89.65:53094.service - OpenSSH per-connection server daemon (139.178.89.65:53094). Jan 30 05:34:07.236875 sshd[4637]: Accepted publickey for core from 139.178.89.65 port 53094 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:07.243708 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:07.251000 systemd-logind[1601]: New session 20 of user core. Jan 30 05:34:07.255272 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 05:34:09.260025 kubelet[3043]: I0130 05:34:09.259477 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rrx7z" podStartSLOduration=330.259453083 podStartE2EDuration="5m30.259453083s" podCreationTimestamp="2025-01-30 05:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:29:00.703824971 +0000 UTC m=+35.435452306" watchObservedRunningTime="2025-01-30 05:34:09.259453083 +0000 UTC m=+343.991080428" Jan 30 05:34:09.331575 containerd[1620]: time="2025-01-30T05:34:09.331009685Z" level=info msg="StopContainer for \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\" with timeout 30 (s)" Jan 30 05:34:09.336438 containerd[1620]: time="2025-01-30T05:34:09.335616861Z" level=info msg="Stop container \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\" with signal terminated" Jan 30 05:34:09.420664 containerd[1620]: time="2025-01-30T05:34:09.420606158Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 05:34:09.430611 containerd[1620]: time="2025-01-30T05:34:09.430341811Z" level=info msg="StopContainer for \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\" with timeout 2 (s)" Jan 30 05:34:09.431184 containerd[1620]: time="2025-01-30T05:34:09.431104852Z" level=info msg="Stop container \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\" with signal terminated" Jan 30 05:34:09.439974 systemd-networkd[1240]: lxc_health: Link DOWN Jan 30 05:34:09.440077 systemd-networkd[1240]: lxc_health: Lost carrier Jan 30 05:34:09.459140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab-rootfs.mount: Deactivated successfully. Jan 30 05:34:09.488490 containerd[1620]: time="2025-01-30T05:34:09.488411759Z" level=info msg="shim disconnected" id=1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab namespace=k8s.io Jan 30 05:34:09.489226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c-rootfs.mount: Deactivated successfully. 
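
Note on the StopContainer entries above: "with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the usual CRI stop contract, deliver SIGTERM, allow the stated grace period (30 s for cilium-operator, 2 s for the agent here), then escalate to SIGKILL. A process-level sketch of those semantics using plain POSIX signals, not containerd's task API:

    import signal
    import subprocess

    def stop_with_timeout(proc: subprocess.Popen, timeout_s: float) -> None:
        """SIGTERM first, wait out the grace period, then SIGKILL."""
        proc.send_signal(signal.SIGTERM)     # "with signal terminated"
        try:
            proc.wait(timeout=timeout_s)     # "with timeout 30 (s)" / "timeout 2 (s)"
        except subprocess.TimeoutExpired:
            proc.kill()                      # escalate once the grace period expires
            proc.wait()

    p = subprocess.Popen(["sleep", "60"])
    stop_with_timeout(p, 2.0)
    print("exit status:", p.returncode)      # negative means killed by that signal
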
Jan 30 05:34:09.489394 containerd[1620]: time="2025-01-30T05:34:09.489375706Z" level=warning msg="cleaning up after shim disconnected" id=1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab namespace=k8s.io Jan 30 05:34:09.489464 containerd[1620]: time="2025-01-30T05:34:09.489451758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:34:09.498886 containerd[1620]: time="2025-01-30T05:34:09.498814355Z" level=info msg="shim disconnected" id=4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c namespace=k8s.io Jan 30 05:34:09.499188 containerd[1620]: time="2025-01-30T05:34:09.498895667Z" level=warning msg="cleaning up after shim disconnected" id=4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c namespace=k8s.io Jan 30 05:34:09.499188 containerd[1620]: time="2025-01-30T05:34:09.498908491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:34:09.515468 containerd[1620]: time="2025-01-30T05:34:09.514956197Z" level=info msg="StopContainer for \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\" returns successfully" Jan 30 05:34:09.518328 containerd[1620]: time="2025-01-30T05:34:09.518010239Z" level=info msg="StopContainer for \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\" returns successfully" Jan 30 05:34:09.518410 containerd[1620]: time="2025-01-30T05:34:09.518394294Z" level=info msg="StopPodSandbox for \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\"" Jan 30 05:34:09.518470 containerd[1620]: time="2025-01-30T05:34:09.518457502Z" level=info msg="Container to stop \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:34:09.519303 containerd[1620]: time="2025-01-30T05:34:09.518501093Z" level=info msg="StopPodSandbox for \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\"" Jan 30 05:34:09.519344 containerd[1620]: time="2025-01-30T05:34:09.519307977Z" level=info msg="Container to stop \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:34:09.519344 containerd[1620]: time="2025-01-30T05:34:09.519318437Z" level=info msg="Container to stop \"ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:34:09.519344 containerd[1620]: time="2025-01-30T05:34:09.519327473Z" level=info msg="Container to stop \"2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:34:09.520724 containerd[1620]: time="2025-01-30T05:34:09.519335919Z" level=info msg="Container to stop \"5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:34:09.520768 containerd[1620]: time="2025-01-30T05:34:09.520724347Z" level=info msg="Container to stop \"780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:34:09.522089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd-shm.mount: Deactivated successfully. 
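
Note on the repeated "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"" lines above: StopPodSandbox walks every container in the two sandboxes and skips the ones that have already exited; only running (or unknown-state) containers are signalled. A sketch of that guard using the CRI state names visible in the messages:

    from enum import Enum

    class ContainerState(Enum):
        CREATED = "CONTAINER_CREATED"
        RUNNING = "CONTAINER_RUNNING"
        EXITED = "CONTAINER_EXITED"
        UNKNOWN = "CONTAINER_UNKNOWN"

    def needs_stop(state: ContainerState) -> bool:
        """Only still-running (or unknown) containers get a stop signal on teardown."""
        return state in (ContainerState.RUNNING, ContainerState.UNKNOWN)

    assert not needs_stop(ContainerState.EXITED)   # the case logged above
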
Jan 30 05:34:09.569022 containerd[1620]: time="2025-01-30T05:34:09.568946894Z" level=info msg="shim disconnected" id=baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4 namespace=k8s.io Jan 30 05:34:09.569424 containerd[1620]: time="2025-01-30T05:34:09.569391994Z" level=warning msg="cleaning up after shim disconnected" id=baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4 namespace=k8s.io Jan 30 05:34:09.569424 containerd[1620]: time="2025-01-30T05:34:09.569407483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:34:09.569776 containerd[1620]: time="2025-01-30T05:34:09.569111381Z" level=info msg="shim disconnected" id=71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd namespace=k8s.io Jan 30 05:34:09.569776 containerd[1620]: time="2025-01-30T05:34:09.569649233Z" level=warning msg="cleaning up after shim disconnected" id=71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd namespace=k8s.io Jan 30 05:34:09.569776 containerd[1620]: time="2025-01-30T05:34:09.569656617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:34:09.603230 containerd[1620]: time="2025-01-30T05:34:09.603187736Z" level=info msg="TearDown network for sandbox \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" successfully" Jan 30 05:34:09.603230 containerd[1620]: time="2025-01-30T05:34:09.603220846Z" level=info msg="StopPodSandbox for \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" returns successfully" Jan 30 05:34:09.603372 containerd[1620]: time="2025-01-30T05:34:09.603303901Z" level=info msg="TearDown network for sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" successfully" Jan 30 05:34:09.603372 containerd[1620]: time="2025-01-30T05:34:09.603313359Z" level=info msg="StopPodSandbox for \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" returns successfully" Jan 30 05:34:09.748850 kubelet[3043]: I0130 05:34:09.748762 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hostproc\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.748850 kubelet[3043]: I0130 05:34:09.748823 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-run\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.748850 kubelet[3043]: I0130 05:34:09.748853 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-bpf-maps\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749219 kubelet[3043]: I0130 05:34:09.748897 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htrgb\" (UniqueName: \"kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-kube-api-access-htrgb\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749219 kubelet[3043]: I0130 05:34:09.748933 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-cilium-config-path\") pod \"3b83b2f4-b0c0-4176-97e7-37dc0e605ed3\" (UID: \"3b83b2f4-b0c0-4176-97e7-37dc0e605ed3\") " Jan 30 05:34:09.749219 kubelet[3043]: I0130 05:34:09.748963 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-clustermesh-secrets\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749219 kubelet[3043]: I0130 05:34:09.748991 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-lib-modules\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749219 kubelet[3043]: I0130 05:34:09.749016 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cni-path\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749219 kubelet[3043]: I0130 05:34:09.749046 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sztbr\" (UniqueName: \"kubernetes.io/projected/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-kube-api-access-sztbr\") pod \"3b83b2f4-b0c0-4176-97e7-37dc0e605ed3\" (UID: \"3b83b2f4-b0c0-4176-97e7-37dc0e605ed3\") " Jan 30 05:34:09.749533 kubelet[3043]: I0130 05:34:09.749096 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-etc-cni-netd\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749533 kubelet[3043]: I0130 05:34:09.749127 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-config-path\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749533 kubelet[3043]: I0130 05:34:09.749153 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-kernel\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749533 kubelet[3043]: I0130 05:34:09.749214 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-xtables-lock\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749533 kubelet[3043]: I0130 05:34:09.749247 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-net\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749533 kubelet[3043]: I0130 05:34:09.749274 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hubble-tls\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.749956 kubelet[3043]: I0130 05:34:09.749302 3043 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-cgroup\") pod \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\" (UID: \"5b95efdc-0040-48b2-b0e6-9dd57bd04e74\") " Jan 30 05:34:09.751910 kubelet[3043]: I0130 05:34:09.749671 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cni-path" (OuterVolumeSpecName: "cni-path") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.751910 kubelet[3043]: I0130 05:34:09.749421 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.751910 kubelet[3043]: I0130 05:34:09.751813 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hostproc" (OuterVolumeSpecName: "hostproc") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.751910 kubelet[3043]: I0130 05:34:09.751848 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.752555 kubelet[3043]: I0130 05:34:09.752217 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.770963 kubelet[3043]: I0130 05:34:09.768850 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.775246 kubelet[3043]: I0130 05:34:09.772742 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.775246 kubelet[3043]: I0130 05:34:09.772777 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.775246 kubelet[3043]: I0130 05:34:09.772799 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.775246 kubelet[3043]: I0130 05:34:09.774339 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:09.783125 kubelet[3043]: I0130 05:34:09.782944 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-kube-api-access-sztbr" (OuterVolumeSpecName: "kube-api-access-sztbr") pod "3b83b2f4-b0c0-4176-97e7-37dc0e605ed3" (UID: "3b83b2f4-b0c0-4176-97e7-37dc0e605ed3"). InnerVolumeSpecName "kube-api-access-sztbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:34:09.783125 kubelet[3043]: I0130 05:34:09.783083 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-kube-api-access-htrgb" (OuterVolumeSpecName: "kube-api-access-htrgb") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "kube-api-access-htrgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:34:09.785237 kubelet[3043]: I0130 05:34:09.785092 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:34:09.790389 kubelet[3043]: I0130 05:34:09.790296 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:34:09.792331 kubelet[3043]: I0130 05:34:09.792285 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b83b2f4-b0c0-4176-97e7-37dc0e605ed3" (UID: "3b83b2f4-b0c0-4176-97e7-37dc0e605ed3"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:34:09.793552 kubelet[3043]: I0130 05:34:09.793425 3043 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b95efdc-0040-48b2-b0e6-9dd57bd04e74" (UID: "5b95efdc-0040-48b2-b0e6-9dd57bd04e74"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.854987 3043 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-cgroup\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.855051 3043 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hostproc\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.855089 3043 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-run\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.855107 3043 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-bpf-maps\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.855128 3043 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-htrgb\" (UniqueName: \"kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-kube-api-access-htrgb\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.855144 3043 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-cilium-config-path\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.855159 3043 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-clustermesh-secrets\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.855300 kubelet[3043]: I0130 05:34:09.855171 3043 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-lib-modules\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855184 3043 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cni-path\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855195 3043 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sztbr\" (UniqueName: \"kubernetes.io/projected/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3-kube-api-access-sztbr\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855206 3043 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-etc-cni-netd\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855219 3043 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-cilium-config-path\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855231 3043 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-kernel\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855242 3043 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-xtables-lock\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855254 3043 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-host-proc-sys-net\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:09.856227 kubelet[3043]: I0130 05:34:09.855265 3043 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b95efdc-0040-48b2-b0e6-9dd57bd04e74-hubble-tls\") on node \"ci-4081-3-0-c-240f39d8fc\" DevicePath \"\"" Jan 30 05:34:10.392950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd-rootfs.mount: Deactivated successfully. Jan 30 05:34:10.393876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4-rootfs.mount: Deactivated successfully. Jan 30 05:34:10.394226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4-shm.mount: Deactivated successfully. Jan 30 05:34:10.394531 systemd[1]: var-lib-kubelet-pods-3b83b2f4\x2db0c0\x2d4176\x2d97e7\x2d37dc0e605ed3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsztbr.mount: Deactivated successfully. Jan 30 05:34:10.394872 systemd[1]: var-lib-kubelet-pods-5b95efdc\x2d0040\x2d48b2\x2db0e6\x2d9dd57bd04e74-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 05:34:10.395286 systemd[1]: var-lib-kubelet-pods-5b95efdc\x2d0040\x2d48b2\x2db0e6\x2d9dd57bd04e74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhtrgb.mount: Deactivated successfully. Jan 30 05:34:10.395617 systemd[1]: var-lib-kubelet-pods-5b95efdc\x2d0040\x2d48b2\x2db0e6\x2d9dd57bd04e74-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 05:34:10.566677 kubelet[3043]: I0130 05:34:10.566507 3043 scope.go:117] "RemoveContainer" containerID="1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab" Jan 30 05:34:10.573880 containerd[1620]: time="2025-01-30T05:34:10.573812331Z" level=info msg="RemoveContainer for \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\"" Jan 30 05:34:10.585359 containerd[1620]: time="2025-01-30T05:34:10.585162494Z" level=info msg="RemoveContainer for \"1afb9bb433cbb58ee113f45eab4e542f5c8d23639582307b142d50d4180711ab\" returns successfully" Jan 30 05:34:10.588758 kubelet[3043]: I0130 05:34:10.586518 3043 scope.go:117] "RemoveContainer" containerID="4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c" Jan 30 05:34:10.592018 containerd[1620]: time="2025-01-30T05:34:10.591910080Z" level=info msg="RemoveContainer for \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\"" Jan 30 05:34:10.601496 containerd[1620]: time="2025-01-30T05:34:10.601413150Z" level=info msg="RemoveContainer for \"4167b33015f06a3d0c0a64871e3370e17c6da3534c52ba3b7d85859a885b5d0c\" returns successfully" Jan 30 05:34:10.601963 kubelet[3043]: I0130 05:34:10.601936 3043 scope.go:117] "RemoveContainer" containerID="780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005" Jan 30 05:34:10.605332 containerd[1620]: time="2025-01-30T05:34:10.605258597Z" level=info msg="RemoveContainer for \"780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005\"" Jan 30 05:34:10.616025 containerd[1620]: time="2025-01-30T05:34:10.615945303Z" level=info msg="RemoveContainer for \"780fa2cb594079adf40e19acb8d37fcb89b98534ad5564a8fe9ca9763f771005\" returns successfully" Jan 30 05:34:10.616785 kubelet[3043]: I0130 05:34:10.616250 3043 scope.go:117] "RemoveContainer" containerID="5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51" Jan 30 05:34:10.618984 containerd[1620]: time="2025-01-30T05:34:10.618924756Z" level=info msg="RemoveContainer for \"5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51\"" Jan 30 05:34:10.625443 containerd[1620]: time="2025-01-30T05:34:10.625368394Z" level=info msg="RemoveContainer for \"5b4629b16507eaf40bafb8d7792f34f7fb4e14311fcda8cd56e0193d2c9c2c51\" returns successfully" Jan 30 05:34:10.625679 kubelet[3043]: I0130 05:34:10.625606 3043 scope.go:117] "RemoveContainer" containerID="2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d" Jan 30 05:34:10.627740 containerd[1620]: time="2025-01-30T05:34:10.627459321Z" level=info msg="RemoveContainer for \"2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d\"" Jan 30 05:34:10.633540 containerd[1620]: time="2025-01-30T05:34:10.633470163Z" level=info msg="RemoveContainer for \"2d0db50702d3307f4338e491094664315dfe78e610f1c4721eaeaabb69849a6d\" returns successfully" Jan 30 05:34:10.633949 kubelet[3043]: I0130 05:34:10.633814 3043 scope.go:117] "RemoveContainer" containerID="ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c" Jan 30 05:34:10.635479 containerd[1620]: time="2025-01-30T05:34:10.635384010Z" level=info msg="RemoveContainer for \"ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c\"" Jan 30 05:34:10.641074 containerd[1620]: time="2025-01-30T05:34:10.641019162Z" level=info msg="RemoveContainer for \"ede4c0774f609e1d18ee15bc6372e31bafb6c8429a0faac087a57afef3dd6d6c\" returns successfully" Jan 30 05:34:10.689004 kubelet[3043]: E0130 05:34:10.688923 3043 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 05:34:11.357615 sshd[4637]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:11.368853 systemd[1]: sshd@21-49.13.81.87:22-139.178.89.65:53094.service: Deactivated successfully. Jan 30 05:34:11.377125 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 05:34:11.378764 systemd-logind[1601]: Session 20 logged out. Waiting for processes to exit. Jan 30 05:34:11.380773 systemd-logind[1601]: Removed session 20. Jan 30 05:34:11.430849 kubelet[3043]: I0130 05:34:11.430804 3043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b83b2f4-b0c0-4176-97e7-37dc0e605ed3" path="/var/lib/kubelet/pods/3b83b2f4-b0c0-4176-97e7-37dc0e605ed3/volumes" Jan 30 05:34:11.432328 kubelet[3043]: I0130 05:34:11.432266 3043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" path="/var/lib/kubelet/pods/5b95efdc-0040-48b2-b0e6-9dd57bd04e74/volumes" Jan 30 05:34:11.521660 systemd[1]: Started sshd@22-49.13.81.87:22-139.178.89.65:40106.service - OpenSSH per-connection server daemon (139.178.89.65:40106). Jan 30 05:34:12.369641 kubelet[3043]: I0130 05:34:12.369558 3043 setters.go:580] "Node became not ready" node="ci-4081-3-0-c-240f39d8fc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T05:34:12Z","lastTransitionTime":"2025-01-30T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 05:34:12.515619 sshd[4810]: Accepted publickey for core from 139.178.89.65 port 40106 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:12.519024 sshd[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:12.527989 systemd-logind[1601]: New session 21 of user core. Jan 30 05:34:12.532618 systemd[1]: Started session-21.scope - Session 21 of User core. 
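
Note on the setters.go entry above: it shows what flipped the node. Removing /etc/cni/net.d/05-cilium.conf during the cilium teardown left the runtime with no CNI config, so the kubelet publishes Ready=False on the Node object until a replacement plugin initializes. The condition is plain JSON and easy to inspect mechanically; a small sketch over the payload copied from the entry:

    import json

    cond = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-01-30T05:34:12Z",'
        '"lastTransitionTime":"2025-01-30T05:34:12Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'cni plugin not initialized"}'
    )

    if cond["type"] == "Ready" and cond["status"] != "True":
        print(f"node NotReady ({cond['reason']}): {cond['message']}")
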
Jan 30 05:34:13.428722 kubelet[3043]: E0130 05:34:13.426726 3043 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-6v77m" podUID="e434c756-966f-4bc1-83f8-9ea863c93673"
Jan 30 05:34:13.630626 kubelet[3043]: I0130 05:34:13.627583 3043 topology_manager.go:215] "Topology Admit Handler" podUID="6d8505be-f31b-4bdd-a8e9-e17e5205bad8" podNamespace="kube-system" podName="cilium-sff29"
Jan 30 05:34:13.630626 kubelet[3043]: E0130 05:34:13.629634 3043 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" containerName="apply-sysctl-overwrites"
Jan 30 05:34:13.630626 kubelet[3043]: E0130 05:34:13.629652 3043 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" containerName="mount-bpf-fs"
Jan 30 05:34:13.630626 kubelet[3043]: E0130 05:34:13.629659 3043 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b83b2f4-b0c0-4176-97e7-37dc0e605ed3" containerName="cilium-operator"
Jan 30 05:34:13.630626 kubelet[3043]: E0130 05:34:13.629666 3043 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" containerName="clean-cilium-state"
Jan 30 05:34:13.630626 kubelet[3043]: E0130 05:34:13.629673 3043 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" containerName="mount-cgroup"
Jan 30 05:34:13.630626 kubelet[3043]: E0130 05:34:13.629679 3043 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" containerName="cilium-agent"
Jan 30 05:34:13.633994 kubelet[3043]: I0130 05:34:13.633968 3043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b95efdc-0040-48b2-b0e6-9dd57bd04e74" containerName="cilium-agent"
Jan 30 05:34:13.634420 kubelet[3043]: I0130 05:34:13.634409 3043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b83b2f4-b0c0-4176-97e7-37dc0e605ed3" containerName="cilium-operator"
Jan 30 05:34:13.793439 kubelet[3043]: I0130 05:34:13.793259 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-bpf-maps\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793439 kubelet[3043]: I0130 05:34:13.793328 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-cni-path\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793439 kubelet[3043]: I0130 05:34:13.793360 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-etc-cni-netd\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793439 kubelet[3043]: I0130 05:34:13.793393 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-clustermesh-secrets\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793439 kubelet[3043]: I0130 05:34:13.793427 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-cilium-config-path\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793775 kubelet[3043]: I0130 05:34:13.793456 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-hostproc\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793775 kubelet[3043]: I0130 05:34:13.793483 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-cilium-ipsec-secrets\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793775 kubelet[3043]: I0130 05:34:13.793510 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4628\" (UniqueName: \"kubernetes.io/projected/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-kube-api-access-q4628\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793775 kubelet[3043]: I0130 05:34:13.793560 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-lib-modules\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793775 kubelet[3043]: I0130 05:34:13.793588 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-xtables-lock\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793775 kubelet[3043]: I0130 05:34:13.793614 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-host-proc-sys-net\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793997 kubelet[3043]: I0130 05:34:13.793641 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-host-proc-sys-kernel\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793997 kubelet[3043]: I0130 05:34:13.793673 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-cilium-run\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793997 kubelet[3043]: I0130 05:34:13.793730 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-hubble-tls\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.793997 kubelet[3043]: I0130 05:34:13.793758 3043 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d8505be-f31b-4bdd-a8e9-e17e5205bad8-cilium-cgroup\") pod \"cilium-sff29\" (UID: \"6d8505be-f31b-4bdd-a8e9-e17e5205bad8\") " pod="kube-system/cilium-sff29"
Jan 30 05:34:13.816988 sshd[4810]: pam_unix(sshd:session): session closed for user core
Jan 30 05:34:13.823347 systemd[1]: sshd@22-49.13.81.87:22-139.178.89.65:40106.service: Deactivated successfully.
Jan 30 05:34:13.830679 systemd-logind[1601]: Session 21 logged out. Waiting for processes to exit.
Jan 30 05:34:13.832094 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 05:34:13.834220 systemd-logind[1601]: Removed session 21.
Jan 30 05:34:13.976411 containerd[1620]: time="2025-01-30T05:34:13.976292593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sff29,Uid:6d8505be-f31b-4bdd-a8e9-e17e5205bad8,Namespace:kube-system,Attempt:0,}"
Jan 30 05:34:13.982919 systemd[1]: Started sshd@23-49.13.81.87:22-139.178.89.65:40116.service - OpenSSH per-connection server daemon (139.178.89.65:40116).
Jan 30 05:34:14.008514 containerd[1620]: time="2025-01-30T05:34:14.008436655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:34:14.008514 containerd[1620]: time="2025-01-30T05:34:14.008522004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:34:14.008821 containerd[1620]: time="2025-01-30T05:34:14.008791126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:34:14.009178 containerd[1620]: time="2025-01-30T05:34:14.009147469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:34:14.054017 containerd[1620]: time="2025-01-30T05:34:14.053845572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sff29,Uid:6d8505be-f31b-4bdd-a8e9-e17e5205bad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\""
Jan 30 05:34:14.060387 containerd[1620]: time="2025-01-30T05:34:14.060094480Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 05:34:14.077321 containerd[1620]: time="2025-01-30T05:34:14.077182730Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"390f5c906df90b2e189a9b53679b78907a0d66023d6f017674d9fc06e78c1c5e\""
Jan 30 05:34:14.078851 containerd[1620]: time="2025-01-30T05:34:14.078822676Z" level=info msg="StartContainer for \"390f5c906df90b2e189a9b53679b78907a0d66023d6f017674d9fc06e78c1c5e\""
Jan 30 05:34:14.161760 containerd[1620]: time="2025-01-30T05:34:14.161230035Z" level=info msg="StartContainer for \"390f5c906df90b2e189a9b53679b78907a0d66023d6f017674d9fc06e78c1c5e\" returns successfully"
Jan 30 05:34:14.226614 containerd[1620]: time="2025-01-30T05:34:14.226496986Z" level=info msg="shim disconnected" id=390f5c906df90b2e189a9b53679b78907a0d66023d6f017674d9fc06e78c1c5e namespace=k8s.io
Jan 30 05:34:14.226614 containerd[1620]: time="2025-01-30T05:34:14.226598334Z" level=warning msg="cleaning up after shim disconnected" id=390f5c906df90b2e189a9b53679b78907a0d66023d6f017674d9fc06e78c1c5e namespace=k8s.io
Jan 30 05:34:14.226614 containerd[1620]: time="2025-01-30T05:34:14.226612080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:14.594029 containerd[1620]: time="2025-01-30T05:34:14.593849570Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 05:34:14.608662 containerd[1620]: time="2025-01-30T05:34:14.607599289Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0fa8b4fae816e9bd3a7591c798be0d795e6699ed9cef2d69768834f8c6058037\""
Jan 30 05:34:14.609782 containerd[1620]: time="2025-01-30T05:34:14.609017902Z" level=info msg="StartContainer for \"0fa8b4fae816e9bd3a7591c798be0d795e6699ed9cef2d69768834f8c6058037\""
Jan 30 05:34:14.694956 containerd[1620]: time="2025-01-30T05:34:14.694907410Z" level=info msg="StartContainer for \"0fa8b4fae816e9bd3a7591c798be0d795e6699ed9cef2d69768834f8c6058037\" returns successfully"
Jan 30 05:34:14.738384 containerd[1620]: time="2025-01-30T05:34:14.738326870Z" level=info msg="shim disconnected" id=0fa8b4fae816e9bd3a7591c798be0d795e6699ed9cef2d69768834f8c6058037 namespace=k8s.io
Jan 30 05:34:14.738384 containerd[1620]: time="2025-01-30T05:34:14.738376163Z" level=warning msg="cleaning up after shim disconnected" id=0fa8b4fae816e9bd3a7591c798be0d795e6699ed9cef2d69768834f8c6058037 namespace=k8s.io
Jan 30 05:34:14.738384 containerd[1620]: time="2025-01-30T05:34:14.738383676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:14.979751 sshd[4827]: Accepted publickey for core from 139.178.89.65 port 40116 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA
Jan 30 05:34:14.982963 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:34:14.994079 systemd-logind[1601]: New session 22 of user core.
Jan 30 05:34:15.000286 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 05:34:15.427734 kubelet[3043]: E0130 05:34:15.426907 3043 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-6v77m" podUID="e434c756-966f-4bc1-83f8-9ea863c93673"
Jan 30 05:34:15.602613 containerd[1620]: time="2025-01-30T05:34:15.602071471Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 05:34:15.650799 containerd[1620]: time="2025-01-30T05:34:15.650585087Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f80dca5ed154ef82e8d2f745d7e15f6c53b570e7497910c30cc22e7d0307ce0\""
Jan 30 05:34:15.651753 containerd[1620]: time="2025-01-30T05:34:15.651717409Z" level=info msg="StartContainer for \"2f80dca5ed154ef82e8d2f745d7e15f6c53b570e7497910c30cc22e7d0307ce0\""
Jan 30 05:34:15.670979 sshd[4827]: pam_unix(sshd:session): session closed for user core
Jan 30 05:34:15.678607 systemd[1]: sshd@23-49.13.81.87:22-139.178.89.65:40116.service: Deactivated successfully.
Jan 30 05:34:15.694165 kubelet[3043]: E0130 05:34:15.692163 3043 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 05:34:15.696737 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 05:34:15.702073 systemd-logind[1601]: Session 22 logged out. Waiting for processes to exit.
Jan 30 05:34:15.704863 systemd-logind[1601]: Removed session 22.
Jan 30 05:34:15.767623 containerd[1620]: time="2025-01-30T05:34:15.767499890Z" level=info msg="StartContainer for \"2f80dca5ed154ef82e8d2f745d7e15f6c53b570e7497910c30cc22e7d0307ce0\" returns successfully"
Jan 30 05:34:15.813457 containerd[1620]: time="2025-01-30T05:34:15.813364571Z" level=info msg="shim disconnected" id=2f80dca5ed154ef82e8d2f745d7e15f6c53b570e7497910c30cc22e7d0307ce0 namespace=k8s.io
Jan 30 05:34:15.813457 containerd[1620]: time="2025-01-30T05:34:15.813423300Z" level=warning msg="cleaning up after shim disconnected" id=2f80dca5ed154ef82e8d2f745d7e15f6c53b570e7497910c30cc22e7d0307ce0 namespace=k8s.io
Jan 30 05:34:15.813457 containerd[1620]: time="2025-01-30T05:34:15.813431905Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:15.836672 systemd[1]: Started sshd@24-49.13.81.87:22-139.178.89.65:40132.service - OpenSSH per-connection server daemon (139.178.89.65:40132).
Jan 30 05:34:15.905813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f80dca5ed154ef82e8d2f745d7e15f6c53b570e7497910c30cc22e7d0307ce0-rootfs.mount: Deactivated successfully.
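Note: the RunPodSandbox and CreateContainer lines above are CRI gRPC calls made by the kubelet to containerd. What follows is a minimal, illustrative Go sketch of the sandbox call, assuming the default containerd socket path; it is not kubelet's actual code, and the metadata values are simply copied from the log.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed socket path; containerd's default location.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// Mirrors the log's &PodSandboxMetadata{Name:cilium-sff29,...,Attempt:0,}.
    	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "cilium-sff29",
    				Uid:       "6d8505be-f31b-4bdd-a8e9-e17e5205bad8",
    				Namespace: "kube-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Corresponds to the "returns sandbox id" line (c43c45bb... above).
    	fmt.Println("sandbox id:", resp.PodSandboxId)
    }

The sandbox ID returned by this call is what every subsequent "CreateContainer within sandbox" line references.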
Jan 30 05:34:16.606447 containerd[1620]: time="2025-01-30T05:34:16.606387007Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 05:34:16.643667 containerd[1620]: time="2025-01-30T05:34:16.643602708Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dff5211692ced94f35e7828623240a376fda5012581a2be33f5a4e4f34849607\""
Jan 30 05:34:16.645276 containerd[1620]: time="2025-01-30T05:34:16.645052250Z" level=info msg="StartContainer for \"dff5211692ced94f35e7828623240a376fda5012581a2be33f5a4e4f34849607\""
Jan 30 05:34:16.730775 containerd[1620]: time="2025-01-30T05:34:16.730739752Z" level=info msg="StartContainer for \"dff5211692ced94f35e7828623240a376fda5012581a2be33f5a4e4f34849607\" returns successfully"
Jan 30 05:34:16.776234 containerd[1620]: time="2025-01-30T05:34:16.776101096Z" level=info msg="shim disconnected" id=dff5211692ced94f35e7828623240a376fda5012581a2be33f5a4e4f34849607 namespace=k8s.io
Jan 30 05:34:16.776234 containerd[1620]: time="2025-01-30T05:34:16.776222922Z" level=warning msg="cleaning up after shim disconnected" id=dff5211692ced94f35e7828623240a376fda5012581a2be33f5a4e4f34849607 namespace=k8s.io
Jan 30 05:34:16.776234 containerd[1620]: time="2025-01-30T05:34:16.776231358Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:16.816842 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 40132 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA
Jan 30 05:34:16.819557 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:34:16.826292 systemd-logind[1601]: New session 23 of user core.
Jan 30 05:34:16.833132 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 05:34:16.905796 systemd[1]: run-containerd-runc-k8s.io-dff5211692ced94f35e7828623240a376fda5012581a2be33f5a4e4f34849607-runc.xlive0.mount: Deactivated successfully.
Jan 30 05:34:16.906231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff5211692ced94f35e7828623240a376fda5012581a2be33f5a4e4f34849607-rootfs.mount: Deactivated successfully.
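Note: each "StartContainer ... returns successfully" followed by a "shim disconnected" trio is a short-lived init container exiting: the runc v2 shim goes away once its process does, and containerd logs the cleanup. A hedged sketch of observing such an exit with the containerd Go client (namespace k8s.io as in the log; socket path assumed):

    package shimwatch

    import (
    	"context"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    // waitForExit loads a container such as dff52116... and blocks until its
    // task exits, which is the moment the shim disconnects in the log above.
    func waitForExit(ctx context.Context, id string) (uint32, error) {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		return 0, err
    	}
    	defer client.Close()

    	// Kubernetes-managed containers live in the k8s.io namespace.
    	ctx = namespaces.WithNamespace(ctx, "k8s.io")
    	c, err := client.LoadContainer(ctx, id)
    	if err != nil {
    		return 0, err
    	}
    	task, err := c.Task(ctx, nil)
    	if err != nil {
    		return 0, err
    	}
    	statusC, err := task.Wait(ctx)
    	if err != nil {
    		return 0, err
    	}
    	status := <-statusC
    	return status.ExitCode(), nil
    }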
Jan 30 05:34:17.427989 kubelet[3043]: E0130 05:34:17.427493 3043 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-6v77m" podUID="e434c756-966f-4bc1-83f8-9ea863c93673"
Jan 30 05:34:17.616743 containerd[1620]: time="2025-01-30T05:34:17.613780880Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 05:34:17.648651 containerd[1620]: time="2025-01-30T05:34:17.648425380Z" level=info msg="CreateContainer within sandbox \"c43c45bb68b3684c4e0b86b278cfc2d665366812c82c4d726d733c0d189f1fc1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bf93c880c8d3e434ad2e969acc56c3a337ff340cd0360660998c13288d300a4\""
Jan 30 05:34:17.651728 containerd[1620]: time="2025-01-30T05:34:17.650223071Z" level=info msg="StartContainer for \"6bf93c880c8d3e434ad2e969acc56c3a337ff340cd0360660998c13288d300a4\""
Jan 30 05:34:17.740234 containerd[1620]: time="2025-01-30T05:34:17.740093685Z" level=info msg="StartContainer for \"6bf93c880c8d3e434ad2e969acc56c3a337ff340cd0360660998c13288d300a4\" returns successfully"
Jan 30 05:34:18.489863 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 05:34:18.538890 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Jan 30 05:34:18.564060 kernel: DRBG: Continuing without Jitter RNG
Jan 30 05:34:19.426761 kubelet[3043]: E0130 05:34:19.426287 3043 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-6v77m" podUID="e434c756-966f-4bc1-83f8-9ea863c93673"
Jan 30 05:34:19.893543 systemd[1]: run-containerd-runc-k8s.io-6bf93c880c8d3e434ad2e969acc56c3a337ff340cd0360660998c13288d300a4-runc.QHvr6E.mount: Deactivated successfully.
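Note: taken together, the CreateContainer sequence since 05:34:14 (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent) is the kubelet running the cilium pod's init containers one at a time before the long-running agent; the kernel's seqiv(rfc4106(gcm(aes))) line right after the agent starts is consistent with the cilium-ipsec-secrets volume mounted earlier. A sketch of that ordering in client-go types; the image reference is a hypothetical placeholder, since the log never records it:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // image is a placeholder; the log does not record the image reference.
    const image = "quay.io/cilium/cilium:v1.x"

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "cilium-sff29", Namespace: "kube-system"},
    		Spec: corev1.PodSpec{
    			// Init containers run sequentially, in this order, to completion.
    			InitContainers: []corev1.Container{
    				{Name: "mount-cgroup", Image: image},
    				{Name: "apply-sysctl-overwrites", Image: image},
    				{Name: "mount-bpf-fs", Image: image},
    				{Name: "clean-cilium-state", Image: image},
    			},
    			// The long-running agent starts only after all of the above exit 0.
    			Containers: []corev1.Container{
    				{Name: "cilium-agent", Image: image},
    			},
    		},
    	}
    	for _, c := range pod.Spec.InitContainers {
    		fmt.Println("init:", c.Name)
    	}
    	fmt.Println("main:", pod.Spec.Containers[0].Name)
    }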
Jan 30 05:34:22.044629 systemd-networkd[1240]: lxc_health: Link UP
Jan 30 05:34:22.057298 systemd-networkd[1240]: lxc_health: Gained carrier
Jan 30 05:34:22.203206 kubelet[3043]: E0130 05:34:22.201949 3043 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51998->127.0.0.1:44717: write tcp 127.0.0.1:51998->127.0.0.1:44717: write: broken pipe
Jan 30 05:34:23.713920 systemd-networkd[1240]: lxc_health: Gained IPv6LL
Jan 30 05:34:24.034749 kubelet[3043]: I0130 05:34:24.034330 3043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sff29" podStartSLOduration=11.034303407 podStartE2EDuration="11.034303407s" podCreationTimestamp="2025-01-30 05:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:34:18.663948827 +0000 UTC m=+353.395576172" watchObservedRunningTime="2025-01-30 05:34:24.034303407 +0000 UTC m=+358.765930772"
Jan 30 05:34:25.454406 containerd[1620]: time="2025-01-30T05:34:25.454159309Z" level=info msg="StopPodSandbox for \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\""
Jan 30 05:34:25.455813 containerd[1620]: time="2025-01-30T05:34:25.454320119Z" level=info msg="TearDown network for sandbox \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" successfully"
Jan 30 05:34:25.455813 containerd[1620]: time="2025-01-30T05:34:25.454973537Z" level=info msg="StopPodSandbox for \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" returns successfully"
Jan 30 05:34:25.460663 containerd[1620]: time="2025-01-30T05:34:25.458243053Z" level=info msg="RemovePodSandbox for \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\""
Jan 30 05:34:25.460663 containerd[1620]: time="2025-01-30T05:34:25.458667424Z" level=info msg="Forcibly stopping sandbox \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\""
Jan 30 05:34:25.460663 containerd[1620]: time="2025-01-30T05:34:25.459047584Z" level=info msg="TearDown network for sandbox \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" successfully"
Jan 30 05:34:25.469301 containerd[1620]: time="2025-01-30T05:34:25.469259023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 05:34:25.469610 containerd[1620]: time="2025-01-30T05:34:25.469535888Z" level=info msg="RemovePodSandbox \"71fb77faead4a3d3604f2b1de8588459f210c984d2ba6f0f93c93b21b12d10dd\" returns successfully"
Jan 30 05:34:25.472391 containerd[1620]: time="2025-01-30T05:34:25.472338464Z" level=info msg="StopPodSandbox for \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\""
Jan 30 05:34:25.472582 containerd[1620]: time="2025-01-30T05:34:25.472539579Z" level=info msg="TearDown network for sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" successfully"
Jan 30 05:34:25.473148 containerd[1620]: time="2025-01-30T05:34:25.472739292Z" level=info msg="StopPodSandbox for \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" returns successfully"
Jan 30 05:34:25.474201 containerd[1620]: time="2025-01-30T05:34:25.473488258Z" level=info msg="RemovePodSandbox for \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\""
Jan 30 05:34:25.474201 containerd[1620]: time="2025-01-30T05:34:25.473516180Z" level=info msg="Forcibly stopping sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\""
Jan 30 05:34:25.474201 containerd[1620]: time="2025-01-30T05:34:25.473582954Z" level=info msg="TearDown network for sandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" successfully"
Jan 30 05:34:25.479779 containerd[1620]: time="2025-01-30T05:34:25.479749583Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 05:34:25.479967 containerd[1620]: time="2025-01-30T05:34:25.479931011Z" level=info msg="RemovePodSandbox \"baa28a65f7be81e9ac681138c24ae39a96d59a12a49fe2d8c0d281d6382e35e4\" returns successfully"
Jan 30 05:34:28.948001 sshd[5058]: pam_unix(sshd:session): session closed for user core
Jan 30 05:34:28.954048 systemd[1]: sshd@24-49.13.81.87:22-139.178.89.65:40132.service: Deactivated successfully.
Jan 30 05:34:28.962932 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 05:34:28.966450 systemd-logind[1601]: Session 23 logged out. Waiting for processes to exit.
Jan 30 05:34:28.968868 systemd-logind[1601]: Removed session 23.
Jan 30 05:34:44.630174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a568b4b851e5c78d8e9198e7f9dc283ecf90a4bf3e9b8f424fb657449d1f8e16-rootfs.mount: Deactivated successfully.
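Note: the 05:34:25 StopPodSandbox/RemovePodSandbox lines are the kubelet garbage-collecting the sandboxes of the replaced cilium pods; the "not found" warnings are benign, as the removal is forced anyway. A hedged sketch of the same two CRI calls, reusing a RuntimeServiceClient like the one constructed in the earlier sketch:

    package sandboxgc

    import (
    	"context"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeSandbox mirrors the stop-then-remove sequence in the log:
    // StopPodSandbox tears down the sandbox's network, RemovePodSandbox
    // deletes its remaining state.
    func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
    	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
    		return err
    	}
    	_, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
    	return err
    }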
Jan 30 05:34:44.641302 containerd[1620]: time="2025-01-30T05:34:44.641220483Z" level=info msg="shim disconnected" id=a568b4b851e5c78d8e9198e7f9dc283ecf90a4bf3e9b8f424fb657449d1f8e16 namespace=k8s.io
Jan 30 05:34:44.641302 containerd[1620]: time="2025-01-30T05:34:44.641297095Z" level=warning msg="cleaning up after shim disconnected" id=a568b4b851e5c78d8e9198e7f9dc283ecf90a4bf3e9b8f424fb657449d1f8e16 namespace=k8s.io
Jan 30 05:34:44.641302 containerd[1620]: time="2025-01-30T05:34:44.641316762Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:44.718151 kubelet[3043]: I0130 05:34:44.717912 3043 scope.go:117] "RemoveContainer" containerID="a568b4b851e5c78d8e9198e7f9dc283ecf90a4bf3e9b8f424fb657449d1f8e16"
Jan 30 05:34:44.724903 containerd[1620]: time="2025-01-30T05:34:44.724856170Z" level=info msg="CreateContainer within sandbox \"38f7e4dd89a1f47c5496025ea07d460233743fc1983c6688445831988fa06203\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 05:34:44.752623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714747536.mount: Deactivated successfully.
Jan 30 05:34:44.754635 containerd[1620]: time="2025-01-30T05:34:44.754488387Z" level=info msg="CreateContainer within sandbox \"38f7e4dd89a1f47c5496025ea07d460233743fc1983c6688445831988fa06203\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2f3f2d949beac60a4239f765f39059b218f6db1e23c15a51b9718a7b9c6cd2a5\""
Jan 30 05:34:44.755367 containerd[1620]: time="2025-01-30T05:34:44.755313566Z" level=info msg="StartContainer for \"2f3f2d949beac60a4239f765f39059b218f6db1e23c15a51b9718a7b9c6cd2a5\""
Jan 30 05:34:44.872806 containerd[1620]: time="2025-01-30T05:34:44.872740839Z" level=info msg="StartContainer for \"2f3f2d949beac60a4239f765f39059b218f6db1e23c15a51b9718a7b9c6cd2a5\" returns successfully"
Jan 30 05:34:44.988972 kubelet[3043]: E0130 05:34:44.988804 3043 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56162->10.0.0.2:2379: read: connection timed out"
Jan 30 05:34:49.448604 kubelet[3043]: E0130 05:34:49.444637 3043 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55974->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-0-c-240f39d8fc.181f619637ed14fd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-0-c-240f39d8fc,UID:c25cd633f17e0d8aa0b1f700dc8b4165,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c-240f39d8fc,},FirstTimestamp:2025-01-30 05:34:38.999573757 +0000 UTC m=+373.731201133,LastTimestamp:2025-01-30 05:34:38.999573757 +0000 UTC m=+373.731201133,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c-240f39d8fc,}"
Jan 30 05:34:50.349772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-421ba78610de8357cdb798af6c23362666a4f473a7c671cb8da5be0cf1af156e-rootfs.mount: Deactivated successfully.
Jan 30 05:34:50.370287 containerd[1620]: time="2025-01-30T05:34:50.370097053Z" level=info msg="shim disconnected" id=421ba78610de8357cdb798af6c23362666a4f473a7c671cb8da5be0cf1af156e namespace=k8s.io
Jan 30 05:34:50.370287 containerd[1620]: time="2025-01-30T05:34:50.370247704Z" level=warning msg="cleaning up after shim disconnected" id=421ba78610de8357cdb798af6c23362666a4f473a7c671cb8da5be0cf1af156e namespace=k8s.io
Jan 30 05:34:50.370287 containerd[1620]: time="2025-01-30T05:34:50.370277390Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:50.753648 kubelet[3043]: I0130 05:34:50.753550 3043 scope.go:117] "RemoveContainer" containerID="421ba78610de8357cdb798af6c23362666a4f473a7c671cb8da5be0cf1af156e"
Jan 30 05:34:50.758153 containerd[1620]: time="2025-01-30T05:34:50.758060219Z" level=info msg="CreateContainer within sandbox \"e4b3a4f5a27dd99bfdf8d1f4530ad941ae2095dee1bbd894f6fa6fc4461e8159\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 05:34:50.786186 containerd[1620]: time="2025-01-30T05:34:50.786100123Z" level=info msg="CreateContainer within sandbox \"e4b3a4f5a27dd99bfdf8d1f4530ad941ae2095dee1bbd894f6fa6fc4461e8159\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"717ef29e252ed088fda13337d5f702bd8b5df549524aa717bf2e689988d4a291\""
Jan 30 05:34:50.788724 containerd[1620]: time="2025-01-30T05:34:50.787329657Z" level=info msg="StartContainer for \"717ef29e252ed088fda13337d5f702bd8b5df549524aa717bf2e689988d4a291\""
Jan 30 05:34:50.897644 containerd[1620]: time="2025-01-30T05:34:50.897587927Z" level=info msg="StartContainer for \"717ef29e252ed088fda13337d5f702bd8b5df549524aa717bf2e689988d4a291\" returns successfully"
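Note: Attempt:1 in the two CreateContainer lines above marks the second incarnation of kube-controller-manager and kube-scheduler in their existing sandboxes; after each shim died, the kubelet removed the old container and recreated it with the attempt counter incremented. A hedged CRI sketch of that recreation (image and other config fields are omitted because the log does not record them; a real call requires them):

    package restart

    import (
    	"context"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // recreateContainer creates the next attempt of a named container inside
    // an existing sandbox, as in &ContainerMetadata{Name:kube-scheduler,Attempt:1,}.
    func recreateContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
    	sandboxID, name string, attempt uint32) (string, error) {
    	resp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sandboxID,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: name, Attempt: attempt},
    			// Image, mounts, and env are omitted here; a real call needs them.
    		},
    		// A real call also passes the pod's PodSandboxConfig as SandboxConfig.
    	})
    	if err != nil {
    		return "", err
    	}
    	return resp.ContainerId, nil
    }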